TensorFlow and GPU Hardware Acceleration | China Data Analysis Industry Network (中国数据分析行业网)
The highlights of Spark Summit EU: TensorFlow and GPU hardware acceleration.

Spark Summit EU 2016

Spark Summit EU 2016 took place last week in Brussels. The highlights of the conference were the integration of the deep-learning library TensorFlow with Apache Spark, online learning with Structured Streaming, and GPU hardware acceleration.
The most notable session on day one previewed an innovation introduced with Spark 2.0. Its API is a simplified interface over DataFrames and Datasets that makes big-data applications easier to develop. The second-generation Tungsten engine brings processing closer to the hardware by applying MPP-database ideas to query processing: for intermediate data, and for data held in memory in a space-efficient columnar layout, it generates bytecode that makes full use of the CPU registers.
Whichever API is used, the data-operation graph is optimized by the Catalyst Optimizer, which generates an execution plan for the compute instructions across the cluster and optimizes each operation.
Structured Streaming, a new high-level streaming API released as an alpha, was also introduced at the conference. It integrates with Spark's Dataset and DataFrame so that developers can describe reading and writing data from and to external systems in a way similar to Spark's batch API. By compiling streaming instructions as batch instructions it provides strong consistency, and it lets transactional systems integrate with storage systems such as HDFS and AWS S3.
On day two, Databricks CEO Ali Ghodsi portrayed Spark as a tool that democratizes AI by simplifying data preparation and the management of compute jobs for machine-learning algorithms. Earlier this year, the deep-learning library TensorFlow was integrated to run on top of Spark through a library called TensorFrames, which allows data to be passed between DataFrames and TensorFlow at runtime.
The data-science track held a session on how Structured Streaming makes machine learning elastic and enables online learning: machine-learning models can then be updated as data arrives, rather than trained by an offline batch job.
The final highlight was the announcement of GPU support and further deep-learning library integrations on the Databricks platform. GPU support is delivered through hardware libraries such as CUDA, which come pre-built in Databricks; this is said to lower the cost of cluster setup.
Source: 网络大数据

First Contact with TensorFlow, Chapter 6: Concurrency (translated)
The first version of TensorFlow was released in November 2015; it could already run on GPU servers and train models there. The update of February 2016 added distributed and concurrent processing.
In this short chapter I will show how GPUs are used. For readers who want a deeper understanding of how these devices work, the final chapter lists some references. This book does not discuss the details of the distributed version; readers interested in those will likewise find references in the final chapter.

The GPU execution environment

For TensorFlow to support GPUs, CUDA Toolkit 7.0 and cuDNN 6.5 v2 need to be installed. To set up this environment, readers are advised to visit the CUDA website for installation details.
TensorFlow refers to these devices as follows:
- "/cpu:0": the server's CPU.
- "/gpu:0": the server's GPU, if only one is available.
- "/gpu:1": the server's second GPU, and so on.
To track which device each operation and tensor is placed on, we create the session with the log_device_placement parameter set to True. A sample is shown below:
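The book's listing did not survive in this copy; the following is a minimal sketch of what it describes, written against the classic 1.x-style API (accessed through tf.compat.v1 so it also runs on current TensorFlow installs):

```python
# Sketch (reconstructed, not the book's exact listing): create a session
# with log_device_placement=True so that every op's assigned device
# ("/cpu:0", "/gpu:0", ...) is written to the log.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

a = tf.constant([1.0, 2.0, 3.0], name="a")
b = tf.constant([4.0, 5.0, 6.0], name="b")
c = a * b

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))  # device assignments are logged as the graph runs
```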
When you run this code on your own machine, you will see similar placement output, and from it you can also see on which device each part was scheduled to execute.
If we want a specific operation to run on a specific device, rather than letting the system choose one automatically, we can use tf.device to create a device context; every operation defined within that context is scheduled onto that device.
If the system has more than one GPU, the GPU with the lowest index is selected by default. To run operations on a different GPU we must say so explicitly. For example, to run the previous code on GPU number 2, we specify tf.device("/gpu:2"):
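The corresponding listing is also missing here; a sketch of the pinning pattern (my reconstruction, using "/cpu:0" so it runs on any machine; on a box with three GPUs the identical pattern works with tf.device("/gpu:2")):

```python
# Sketch: pin ops to an explicit device with a tf.device context.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

with tf.device("/cpu:0"):  # replace with "/gpu:2" on a multi-GPU machine
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name="a")
    b = tf.constant([[5.0, 6.0], [7.0, 8.0]], name="b")
    c = tf.matmul(a, b)  # placed on the device chosen above

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print(sess.run(c))
```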
Concurrency with multiple GPUs

If we have more than one GPU, we usually want to use all of them concurrently to solve a problem. For example, we can build our model so that the work is distributed across several GPUs:
The code is similar to the previous example, but now two GPUs carry out the multiplication (to keep the example simple, both GPUs execute the same logic) and the CPU then performs the addition. Because log_device_placement is set to true, we can see how the operations are distributed across the devices:
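A sketch of the two-GPU pattern described above (reconstructed; the device names are assumptions about the machine, and allow_soft_placement=True lets the ops fall back to the CPU when the GPUs are absent, so the sketch also runs on a CPU-only box):

```python
# Sketch: distribute two matmuls across two GPUs, then sum on the CPU.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

c = []
for d in ["/gpu:0", "/gpu:1"]:  # assumed: two GPUs are present
    with tf.device(d):
        a = tf.constant([1.0, 2.0, 3.0, 4.0], shape=[2, 2])
        b = tf.constant([1.0, 2.0, 3.0, 4.0], shape=[2, 2])
        c.append(tf.matmul(a, b))  # each device computes one product

with tf.device("/cpu:0"):
    s = tf.add_n(c)  # the CPU adds the partial results

sess = tf.Session(config=tf.ConfigProto(
    allow_soft_placement=True,   # fall back to CPU if a GPU is missing
    log_device_placement=True))
print(sess.run(s))
```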
Coding a multi-GPU example

To wrap up this short section, let's walk through a code example similar to one shared by Damien Aymeric on GitHub: computing A^n + B^n for n = 10 and comparing the execution time on one GPU versus two. First, import the required libraries:
Create two matrices filled with random values using the numpy library:
Then create two structures to store the results:
Next, define the matpow() function:
Running this code on a single GPU proceeds as follows:
Running it on two GPUs looks like this:
Finally, print the computation time:
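The listings for the steps above were lost in this copy; the following is a combined reconstruction (matrix sizes and device strings are my assumptions, and "/cpu:0" is used so the sketch runs anywhere):

```python
# Reconstruction of the A^n + B^n timing benchmark described above,
# after Damien Aymeric's example.
import datetime

import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

n = 10
A = np.random.rand(100, 100).astype("float32")
B = np.random.rand(100, 100).astype("float32")

c1 = []  # holds the partial results A^n and B^n

def matpow(M, n):
    """Compute M**n by repeated tf.matmul."""
    if n == 1:
        return M
    return tf.matmul(M, matpow(M, n - 1))

# Single-device version. For the 1-GPU timing use "/gpu:0"; for the 2-GPU
# timing, put matpow(a, n) under "/gpu:0" and matpow(b, n) under "/gpu:1"
# and keep the final addition on the CPU.
with tf.device("/cpu:0"):
    a = tf.placeholder(tf.float32, [100, 100])
    b = tf.placeholder(tf.float32, [100, 100])
    c1.append(matpow(a, n))
    c1.append(matpow(b, n))
    total = tf.add_n(c1)  # A^n + B^n

t1 = datetime.datetime.now()
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    result = sess.run(total, feed_dict={a: A, b: B})
t2 = datetime.datetime.now()
print("elapsed:", t2 - t1)
```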
Distributed TensorFlow

As mentioned earlier, Google open-sourced the distributed version of TensorFlow in February 2016. It is based on gRPC, a high-performance, open-source RPC framework for inter-process communication (the same protocol used by TensorFlow Serving).
To use the distributed version you must compile the binaries yourself, because for now the library is only provided in source form. This book does not discuss the details of the distributed version; readers who want to learn more should visit the official distributed TensorFlow site.
As in previous chapters, the code for this chapter is available on GitHub. I hope this chapter has made clear how model training can be accelerated with multiple GPUs.
What projects are similar to TensorFlow? What are TensorFlow's strengths and weaknesses?
Introduction: a little over a year has passed since TensorFlow was open-sourced at the end of 2015, and its official 1.0 release shipped not long ago. Over this period TensorFlow has kept delivering surprises: a distributed version, the serving framework TensorFlow Serving, the visualization tool TensorBoard, the high-level wrapper TF.Learn, bindings for other languages (Go, Java, Rust, Haskell), Windows support, the JIT compiler XLA, the dynamic-computation-graph framework Fold, and countless classic models implemented on TensorFlow (Inception Net, SyntaxNet, and so on). In little more than a year, TensorFlow has gone from a newcomer in the deep-learning framework wars to the near-monopoly de facto industry standard.
  There is still plenty of skepticism about artificial intelligence, but it is undeniably where the field is heading.
  This article collects some highlights from the Q&A that 黄文坚 (Huang Wenjian) and 唐源 (Tang Yuan) held on OSChina (开源中国), grouped into the following categories.
Getting started with TensorFlow
TensorFlow performance
Where TensorFlow fits
TensorFlow in practice
Other related questions
I. Getting started with TensorFlow
1. I haven't used it, but from a quick look the idea is: describe something as data, show the machine labeled samples of what it is or what to do with it, and after training the machine can tell us what new input is, or act on it automatically? For example, feed it a pile of pictures labeled as cats and afterwards it can recognize cats by itself; fit a car with sensors, let a person drive it for a while, and eventually it knows what to do in each situation and can drive itself? I'm not sure this understanding is right and would appreciate corrections.
  That's right; what you describe is one class of applications and belongs to machine learning, but far more than that is possible, so do keep an eye on the field. Deep learning is a branch of machine learning, and TensorFlow is a framework used mainly for deep-learning applications.
2. I'm a TensorFlow enthusiast and currently learning it. There isn't much material in China, so thank you for what you've provided. What is the learning curve like, and are there practical case studies? Also, how good is the cluster support, how friendly is the Spark integration, and is there a roadmap for it?
  The book is full of hands-on examples, so do pick up a copy! As for Spark-cluster friendliness, have a look at TensorFlowOnSpark, which Yahoo recently open-sourced.
3. Judging by the questions here, TensorFlow has a steep learning curve and few people researching it. Is there a way to flatten the curve and make it easier to get started? And what academic foundations does it require?
  You can start with Keras, a high-level wrapper that supports TensorFlow. Before learning TensorFlow you need basic Python programming skills and some understanding of deep learning. I'm also working with RStudio to make this runnable from R; you can follow my GitHub.
4. Really looking forward to the TensorFlow book. Will it be hard for a beginner to learn from? What fundamentals should be mastered first?
  You can start with the documentation on the official TensorFlow Chinese site. The book is not difficult for beginners: it requires basic Python skills and some machine-learning fundamentals. It explains deep learning at length, so little prior deep-learning knowledge is required.
5. As a programmer raised on the Spring stack, is this book suitable for getting into TensorFlow? I'd also appreciate some guidance on moving into machine learning. Thanks.
  Absolutely. Learn basic Python syntax, then machine learning and deep learning, and try building small related applications. You can also look at Yahoo's recent TensorFlowOnSpark, or start from sklearn + numpy + pandas.
6. How much mathematics does learning TensorFlow require? Beyond calculus, linear algebra, and integration, what else do I need?
  If you only want to master the tool, you don't need much theory. For instance, if you want to use it for machine-learning applications, the tf.contrib.learn module I work on, which is similar to scikit-learn, lowers the bar considerably; I hope it helps. If you want to do serious research, everything you listed is essential groundwork; on top of it, follow the related research and establish the directions that interest you.
7. What technology stack does learning TensorFlow require, and does understanding TensorFlow mean reading its source code?
  If you only want to call the high-level modules and build applications, basic Python is enough. If you want to go deeper in some area, being able to read the code is ideal; when I first joined the open-source project I also only knew the basics, and by participating actively in development and discussion you learn a great deal. To master the low-level details you will need to learn languages like C/C++.
  At the very bottom there is also CUDA code. It depends on how deep you want to go: if you just want to build applications and get results quickly, reading the API is enough; if you want to optimize performance, you may need to read the source.
8. What projects are similar to TensorFlow, and what are TensorFlow's strengths and weaknesses?
  There are also Caffe, CNTK, MXNet, and others. Chapter 2 of 《TensorFlow实战》 compares TensorFlow with the other learning frameworks in detail; there is also an article excerpted from that chapter.
9. Can TensorFlow only be deployed on Linux machines?
  Mac, Windows, mobile, and Raspberry Pi all work.
II. TensorFlow performance
1. I know TensorFlow's strengths: good architecture, cross-platform, rich APIs, easy deployment, and a big-company product. The question is its actual performance. I've seen several benchmarks online claiming it is absurdly slow, noticeably slower than Torch on both CPU and GPU, and one comment even claims TensorFlow is 100x slower than convnetjs. I understand that deep-learning algorithms differ in efficiency; what I'd like to know is: for the same algorithm, how much slower is TensorFlow than the other frameworks? Performance is, after all, a key factor.
  Those benchmarks are quite old; new versions of TensorFlow don't have this problem. TensorFlow may currently be slightly slower on fully connected MLPs, but the upcoming XLA will address that. For CNNs, RNNs, and the like, everyone mostly relies on cuDNN, so the differences are small and performance is essentially on par. You needn't worry: Google uses this framework throughout the company internally, and if it really were slow, with that many users it would have been fixed long ago.
2. Machine learning is usually split into supervised and unsupervised learning. With unsupervised learning on some dataset, what are the features TensorFlow identifies? Also, TensorFlow 1.0 added XLA, which I understand compiles code into native code for a specific GPU or for x86-64; is XLA only used for algebraic operations? And since TensorFlow already accelerates the low level with CUDA's cuDNN library, why add XLA?
  Unsupervised learning is covered in the book; in deep learning it usually means autoencoders and similar models, which extract abstract high-level features and strip out noise. XLA JIT-compiles several stacked operations. CUDA is a language; cuDNN is a deep-learning library. How much CUDA acceleration helps depends on how it is used: executing one layer's computation at a time, or merging the computation of several layers into a single execution. The latter is exactly what XLA does: it compiles a series of simple operations into one. TensorFlow used to be slow when training MLPs and similar networks; with XLA there is a clear improvement.
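In the classic 1.x-style API the XLA JIT described above can be switched on per session through the config (a minimal sketch; the flag is real, but the actual speed-up depends on the model and is not something this snippet can show):

```python
# Sketch: enable XLA JIT compilation for a whole session, so that runs of
# simple stacked ops get fused and compiled into single kernels.
import tensorflow.compat.v1 as tf

config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = (
    tf.OptimizerOptions.ON_1)

# The config is then passed when creating the session:
# sess = tf.Session(config=config)
print(config.graph_options.optimizer_options.global_jit_level)
```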
3. With TensorFlowOnSpark, apart from avoiding data movement between HDFS and TensorFlow, is there a real performance gain? And without TensorFlowOnSpark, is TensorFlow's own distributed performance mature by now?
  TensorFlow's distributed mode is reasonably mature at this point, though perhaps not the fastest. TensorFlowOnSpark should not improve distributed performance, since it adds a pass through Spark's communication layer.
4. Should one choose TensorFlow or Theano? Could users of both libraries compare them, for example on compile speed, runtime speed, and ease of use?
  You can refer to this article, which is also one of the sections of our book 《TensorFlow实战》.
III. Where TensorFlow fits
1. Does TensorFlow have advantages for natural language processing?
  Natural language mainly uses RNNs, LSTMs, GRUs, and the like. The newly released TensorFlow Fold supports Dynamic Batching, which greatly improves computational efficiency and is a very good fit for natural language processing.
2. In actual production environments, which scenarios suit TensorFlow particularly well?
  TensorFlow is very easy to deploy: it can run on Android, iOS, and other clients for tasks such as image recognition and face recognition, and common tasks such as CTR estimation and recommendation can easily be deployed onto server CPUs.
3. Are there cases of TensorFlow being applied in production companies?
  It is used extensively at Google: every scenario that involves deep learning can use TensorFlow, for example search, mail, the voice assistant, machine translation, image captioning, and so on.
4. How widely is TensorFlow applied in the big-data industry? And which design patterns does its source code use?
  Very widely. Google already uses TensorFlow in many products, for example YouTube Watch Next, as well as many research projects; all of Google DeepMind's future research will use this framework. If some piece of code intrigues you, go read the source and learn from it; much of the design has been battle-tested by countless internal projects and users.
  Very many teams inside Google use TensorFlow, for example search, mail, speech, and machine translation. The more data there is, the better deep learning works, and the larger the role a distributed TensorFlow can play.
5. While learning TensorFlow recently, I found its distributed mode has two architectures, in-graph and between-graph. What is the difference between them, or do they target different scenarios?
  In-graph is model parallelism: different nodes of one model run in a distributed fashion. Between-graph is data parallelism: several batches of data are trained at the same time. The design must match the structure of the neural network; model parallelism is harder to implement and requires the network to naturally contain many nodes that can run in parallel, so in general data parallelism is used far more often.
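To make the between-graph terminology concrete, here is a small sketch of the classic 1.x-style cluster setup (the host names and ports are made up): each worker process runs its own copy of the training program, and all of them share parameters through the parameter-server tasks.

```python
# Sketch of a between-graph (data-parallel) cluster layout; only the
# ClusterSpec is built here, no servers are actually started.
import tensorflow.compat.v1 as tf

cluster = tf.train.ClusterSpec({
    "ps":     ["ps0.example.com:2222", "ps1.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})

# Each worker process would then run something like:
#   server = tf.train.Server(cluster, job_name="worker", task_index=0)
#   with tf.device(tf.train.replica_device_setter(cluster=cluster)):
#       ... build the same model graph in every worker ...
print(cluster.num_tasks("worker"))  # 2
```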
6. What is the point of implementing a value network in TensorFlow? Are there other ways to implement one?
  The value network is a model in deep reinforcement learning; it can be used to solve common reinforcement-learning problems such as playing board games, playing video games automatically, and machine control.
7. Does TF have a module similar to Spark Streaming? Is there anything to watch out for when integrating TF with back-end storage such as Cassandra or HDFS? Spark clusters depend on a Master that dispatches work to Workers, which feels fragile; what is TF's distributed architecture and what are its characteristics?
  There is currently nothing like Streaming. Spark is mainly used for data processing; TensorFlow is more about training and learning on the processed data.
8. Is TensorFlow too hard for beginners? It seems to be aimed at R&D; what changes will it bring for server operations?
  TensorFlow is also very good for production use; it is probably the framework best suited to real production environments. Backed by Google's strong engineering team, it has product-grade code, robust quality, and TensorFlow Serving for deployment.
9. Going from individual study and research to real production use, what should one watch out for?
  For personal research there aren't many constraints; for production you can use TensorFlow Serving, which deploys quite efficiently.
10. Is TF's power consumption low enough for it to work standalone on an offline embedded board, achieving a truly autonomous intelligent robot?
  Yes, many embedded devices use TensorFlow. Building a robot, though, involves many steps whose core parts involve machine learning, image processing, and the like; TensorFlow can be used to build those.
11. How do Internet applications combine with TensorFlow? Could you give a brief introduction?
  Many Internet applications are recommender systems; the YouTube Watch Next recommender, for example, uses TensorFlow. tf.contrib.learn now has a dedicated Estimator for Wide and Deep Learning (see the example on the official site; our book also covers it in more depth), which anyone can use.
12. Are there real cases in traditional retail, for example sales forecasting?
  Deep learning can be used to build a sales-forecasting model, as long as the problem can be converted into a classification/prediction problem.
13. Which products use TensorFlow? Are there representative ones?
  See my earlier comments; YouTube Watch Next is one example, and so is the famous AlphaGo.
IV. TensorFlow in practice
1. I'm implementing image classification with TensorFlow, following the CIFAR-10 example. The input images are randomly cropped to 24x24, and training is slow (nearly 20 hours, already on a GPU); are there other ways to improve efficiency? Does TensorFlow have distributed processing, and if so, must the training results from each machine be merged by hand? Would increasing the crop size improve accuracy? Also, some comments online say TensorFlow's C/C++ API is less friendly than Caffe's; what do you think?
  Increasing the crop size reduces the effective sample count, so accuracy won't necessarily improve. How many epochs did those 20-plus hours cover? You can watch the accuracy curve in TensorBoard; you don't necessarily need that many epochs. TensorFlow has distributed training, with a fairly convenient API and no manual merging; 《TensorFlow实战》 has a detailed example of using the distributed version. TF's C/C++ API is quite complete; whether it is friendlier than Caffe's is a matter of taste.
2. Are frameworks combining TensorFlow with Spark, such as TensorFlowOnSpark, mature and usable yet? Also, the new TensorFlow version added Java API support; if one avoids Python, is developing all functionality directly in Java feasible now?
  The Java API is not very mature yet; a lot remains to be implemented. TensorFlowOnSpark is quite interesting: it lets you deploy TensorFlow programs on top of an existing Spark/Hadoop distributed cluster, which avoids moving data between the existing Spark/Hadoop cluster and a separate deep-learning cluster and feeds HDFS data into TensorFlow programs more easily. As for maturity, I can't say, since I haven't tried it myself, but from a quick look Yahoo is using it.
3. I've used some other machine-learning libraries. To get good results, what advice do you have on algorithm selection and on setting and tuning parameters? With so many algorithms, along which analytical dimensions should one choose a suitable one?
  I think the best approach is to enter data-science competitions such as Kaggle; by joining the discussions there and practicing hands-on, you can quickly learn what each parameter means and what parameter ranges work well.
  For ordinary datasets of numeric and categorical features, XGBoost and LightGBM both work very well. If your data volume is large, or consists of images, video, speech, language, or time series, deep learning will give very good results.
4. I plan to build a text-analysis tool, for example extracting the key elements of a news article (time, place, people). How should I approach this with TensorFlow?
  Start by studying NLP (natural language processing). TensorFlow is the tool for implementing your algorithm, but you first need to know which algorithm to use.
5. Does TensorFlow support distributed GPUs? And how should one choose between TensorFlow and XGBoost?
  TensorFlow supports distributed GPUs and is used for deep learning. XGBoost mainly does gradient boosting; code was recently contributed to enable it to run on GPUs, so you can experiment and compare. After all, XGBoost has been battle-tested by Kaggle users and already meets most of their needs.
6. Is there suitable data available for learning TensorFlow now?
  TensorFlow ships with download scripts for the MNIST and CIFAR data; other common datasets, such as ImageNet and Gigaword, must be downloaded yourself.
7. Is the curve a BNN classifier learns a high-degree polynomial?
  What do you mean by BNN? If a neural network has no activation functions, its output is only a linear transformation of the input; once activation functions are added, it is not a high-degree polynomial either.
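The point about activations is easy to check numerically. A small numpy sketch (my illustration, not from the Q&A): stacking two layers without any activation between them collapses into a single affine map, which is exactly why the activation is what gives a network its non-linear expressive power.

```python
# Two "layers" without an activation are equivalent to one affine map.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((4, 3)), rng.standard_normal((3, 2))
b1, b2 = rng.standard_normal(3), rng.standard_normal(2)

def two_layer_no_activation(x):
    # layer 1 then layer 2, with no non-linearity in between
    return (x @ W1 + b1) @ W2 + b2

# The two layers collapse into a single weight matrix and bias:
W = W1 @ W2
b = b1 @ W2 + b2
x = rng.standard_normal(4)
print(np.allclose(two_layer_no_activation(x), x @ W + b))  # True
```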
V. Other related questions
1. What is TensorFlow's development trend?
  More and more contrib modules will be integrated, many convenient high-level APIs added, and more language bindings supported. The newly released XLA (JIT compiler) and Fold (Dynamic Batching) are also major directions for the future.
2. Is there a future for individual developers doing TensorFlow application development? Or do the data and resources all sit with the big companies, so that without feeding a model a suitable amount of data you cannot train it well?
  It isn't limited to deep learning: TensorFlow now also provides many machine-learning Estimators, which is where most of my contributions are; look at the tf.contrib.learn module, and the book has many machine-learning examples.
  It also depends on the specific task. Data is of course needed, but a lot of data is public nowadays; the big companies have more data, but its quality is not always very high.
————————
  Finally, a plug for the two authors' book, 《TensorFlow实战》. With a wealth of code examples, it is an accessible introduction to using TensorFlow, an in-depth look at implementing mainstream neural networks in TensorFlow, and a detailed guide to TensorBoard, multi-GPU parallelism, distributed parallelism, and other components.
                     
Author: 武维 (Wu Wei) | Source: InfoQ

A brief introduction to deep learning and TensorFlow
Deep learning has been applied to image recognition, speech recognition, natural language processing, machine translation, and other scenarios with excellent results. Several deep-learning frameworks now exist, such as TensorFlow, Caffe, Theano, Torch, and MXNet; all of them support models such as deep neural networks, convolutional neural networks, deep belief networks, and recurrent neural networks. TensorFlow was originally developed by researchers and engineers on the Google Brain team and has become the most popular machine-learning project on GitHub.
In the year since TensorFlow was open-sourced it has gained 500+ contributors and 11,000+ commits. Companies running deep learning in production on TensorFlow include ARM, Google, UBER, DeepMind, and JD.com. Google has already applied TensorFlow to many internal projects, such as speech recognition, Gmail, and image search. TensorFlow's main features are:
Flexibility: TensorFlow is a flexible platform for neural-network learning; it uses a graph computation model, offers high-level APIs, and supports Python, C++, Go, and Java interfaces.
Cross-platform: TensorFlow supports computation on CPUs and GPUs, on desktops, servers, and mobile platforms, and supports Windows as of r0.12.
Productization: TensorFlow supports fast migration of learned models from research teams to production teams; research teams publish models and production teams validate them, building a bridge from model research to production practice.
High performance: TensorFlow uses multithreading, queues, and a distributed training model, enabling distributed training in multi-CPU, multi-GPU environments.
This article covers hands-on practice with several key TensorFlow techniques: TensorFlow variables, the TensorFlow application architecture, TensorFlow visualization, GPU usage, and HDFS integration.
TensorFlow variables

Variables in TensorFlow must be initialized before use; during or after model training their values can be saved or restored. Below we show how to create, initialize, save, restore, and share variables.
# Create the model's weights and biases
weights = tf.Variable(tf.random_normal([784, 200], stddev=0.35), name="weights")
biases = tf.Variable(tf.zeros([200]), name="biases")

# Pin a variable to device CPU:0
with tf.device("/cpu:0"):
    v = tf.Variable(...)

# Initialize the model variables
init_op = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init_op)

# Save the model variables (written as three files: model.data, model.index, model.meta)
saver = tf.train.Saver()
saver.save(sess, "/tmp/model")

# Restore the model variables
saver.restore(sess, "/tmp/model")
A complex deep-learning model contains a large number of model variables, and we want to be able to initialize them all at once. TensorFlow provides two APIs, tf.variable_scope and tf.get_variable, which implement shared model variables. tf.get_variable(name, shape, initializer) creates or returns the model variable with the given name, where name is the variable name, shape its dimensions, and initializer its initialization method. tf.variable_scope(scope_name) denotes the namespace the variables live in, where scope_name is the name of that namespace. An example of shared model variables:
# Define the conv-net computation; weights and biases are shared variables
def conv_relu(input, kernel_shape, bias_shape):
    # Create variable "weights"
    weights = tf.get_variable("weights", kernel_shape, initializer=tf.random_normal_initializer())
    # Create variable "biases"
    biases = tf.get_variable("biases", bias_shape, initializer=tf.constant_initializer(0.0))
    conv = tf.nn.conv2d(input, weights, strides=[1, 1, 1, 1], padding='SAME')
    return tf.nn.relu(conv + biases)

# Define the convolutional layers; conv1 and conv2 are variable scopes
with tf.variable_scope("conv1"):
    # Creates variables "conv1/weights" and "conv1/biases"
    relu1 = conv_relu(input_images, [5, 5, 32, 32], [32])
with tf.variable_scope("conv2"):
    # Creates variables "conv2/weights" and "conv2/biases"
    relu1 = conv_relu(relu1, [5, 5, 32, 32], [32])
TensorFlow application architecture

A TensorFlow application has three main parts: model construction, model training, and model evaluation. Model construction means building the deep neural network; model training means running the network computation on the training data inside a TensorFlow session; model evaluation means measuring the model's accuracy on test data, as shown in the figure below.
An example definition of the network model, the loss function, and the training op:
# Two hidden layers and a logits output layer
hidden1 = tf.nn.relu(tf.matmul(images, weights) + biases)
hidden2 = tf.nn.relu(tf.matmul(hidden1, weights) + biases)
logits = tf.matmul(hidden2, weights) + biases

# Loss function: softmax cross-entropy
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits, labels, name='xentropy')
loss = tf.reduce_mean(cross_entropy, name='xentropy_mean')

# Choose the optimization algorithm and define the training op
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
global_step = tf.Variable(0, name='global_step', trainable=False)
train_op = optimizer.minimize(loss, global_step=global_step)
An example of model training and model validation:
# Load the training data and run the training op
for step in xrange(FLAGS.max_steps):
    feed_dict = fill_feed_dict(data_sets.train, images_placeholder, labels_placeholder)
    _, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)

# Load the test data and accumulate the number of correct predictions
for step in xrange(steps_per_epoch):
    feed_dict = fill_feed_dict(data_set, images_placeholder, labels_placeholder)
    true_count += sess.run(eval_correct, feed_dict=feed_dict)
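In the evaluation loop, `true_count` accumulates the number of correctly classified samples that `eval_correct` returns for each batch; dividing by the total number of evaluated samples gives the model's accuracy. A minimal plain-Python sketch of that bookkeeping, with made-up per-batch counts standing in for the `sess.run(eval_correct, ...)` results:

```python
# Hypothetical per-batch correct-prediction counts; in the real loop each
# value would come from sess.run(eval_correct, feed_dict=feed_dict).
correct_per_batch = [97, 95, 99, 96]
batch_size = 100
steps_per_epoch = len(correct_per_batch)

true_count = 0
for step in range(steps_per_epoch):
    true_count += correct_per_batch[step]

num_examples = steps_per_epoch * batch_size
accuracy = true_count / float(num_examples)
print('accuracy = %.4f' % accuracy)  # 387 correct out of 400
```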
TensorFlow Visualization
Large-scale deep neural network models are complex, and their computations are hard to follow. To make such models easier to understand, debug, and optimize, TensorFlow ships a visualization component, TensorBoard, which renders the model graph and charts the statistics collected during training. TensorBoard's application architecture is shown below:
The visualization pipeline has two parts: the summary model inside TensorFlow and the TensorBoard front end. On the TensorFlow side, model variables and sample data are wrapped in summary ops, the summary ops are merged, and a Summary Writer op writes the merged summaries to TensorFlow event logs. TensorBoard then reads the event logs and renders the summaries, including scalar charts, image data, audio data, the model graph, and histograms and distribution charts of variables. The key steps are as follows:
# Define summary ops for variables and training data
tf.summary.scalar('max', tf.reduce_max(var))
tf.summary.histogram('histogram', var)
tf.summary.image('input', image_shaped_input, 10)
# Merge the summary ops so one run generates all summary data
merged = tf.summary.merge_all()
# Define the ops that write summary data to the event logs
train_writer = tf.train.SummaryWriter(FLAGS.log_dir + '/train', sess.graph)
test_writer = tf.train.SummaryWriter(FLAGS.log_dir + '/test')
# Run a training step and write its summaries to the event log
summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True))
train_writer.add_summary(summary, i)
# Download the example code and train the model
python mnist_with_summaries.py
# Start TensorBoard; its UI is served at http://ip_address:6006
tensorboard --logdir=/path/to/log-directory
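The collect-merge-write pipeline can be mimicked in plain Python to make the data flow concrete. The sketch below uses a made-up `ScalarSummaryLog` class (not the `tf.summary` API) whose in-memory list stands in for the event-log file that the Summary Writer appends to:

```python
class ScalarSummaryLog(object):
    """Toy stand-in for the summary -> merge -> SummaryWriter pipeline."""

    def __init__(self):
        self.events = []  # stands in for the on-disk event log

    def merge(self, step, **scalars):
        # "Merging" here just bundles every tagged scalar for one step.
        return (step, dict(scalars))

    def add_summary(self, merged):
        self.events.append(merged)

log = ScalarSummaryLog()
for step, loss in enumerate([2.3, 1.1, 0.6]):  # hypothetical loss values
    log.add_summary(log.merge(step, loss=loss, lr=0.01))

print(len(log.events))  # one merged record per training step: 3
```

TensorBoard's scalar dashboard is essentially a plot of such (step, tag, value) records read back from the log.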
A TensorBoard scalar chart is shown below. The horizontal axis is the training iteration and the vertical axis is the scalar's value, for example model accuracy or cross-entropy. TensorBoard also lets you download these statistics.
The image summaries are shown in the next figure; in this example they display handwritten-digit images from the test and training data.
The TensorFlow graph view, shown next, lays out the training flow clearly; each box is a name scope containing variables. The scopes here are input (input data), input_reshape (reshaping used to render the handwritten digits), layer1 (hidden layer 1), layer2 (hidden layer 2), dropout (dropping neurons to prevent overfitting), accuracy (model accuracy), cross_entropy (the objective function, cross-entropy), and train (model training). For example, the tensors produced by the ops in the input scope feed the ops in the input_reshape, train, accuracy, layer1, and cross_entropy scopes.
The distribution chart of a TensorFlow variable is shown below: the horizontal axis is the iteration and the vertical axis is the variable's value range. The lines mark percentiles, from top to bottom [maximum, 93%, 84%, 69%, 50%, 31%, 16%, 7%, minimum]. For example, the second line from the top is the 93% line: at that iteration, 93% of the weight values lie below it.
The histogram corresponding to the distribution above is shown below:
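Assuming a concrete set of weight values, those percentile bands can be reproduced with NumPy; the data below is illustrative, not taken from the text:

```python
import numpy as np

# One training step's weight values (synthetic stand-in data).
rng = np.random.RandomState(0)
weights = rng.normal(loc=0.0, scale=1.0, size=10000)

# The bands TensorBoard draws between minimum and maximum.
bands = [7, 16, 31, 50, 69, 84, 93]
levels = np.percentile(weights, bands)

# By construction, 93% of the weights lie below the 93rd-percentile line.
below = np.mean(weights <= levels[-1])
print(levels)
print(below)
```

Plotting `levels` for every training step, one curve per band, yields exactly the distribution chart described above.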
Using GPUs in TensorFlow
GPUs are widely used in deep-learning workloads such as image classification, speech recognition, natural language processing, and machine translation, where they have enabled breakthrough performance gains. With thousands of compute cores, a GPU can deliver a 10-100x speedup over a CPU alone. The GPU-enabled build of TensorFlow is the tensorflow-gpu package, and it requires two pieces of software to be installed first: the CUDA compute platform and the cuDNN GPU-accelerated library for deep neural networks. TensorFlow names CPUs and GPUs as follows:
"/cpu:0": the machine's first CPU.
"/gpu:0": the machine's first GPU.
"/gpu:1": the machine's second GPU, and so on.
Every TensorFlow op has both CPU and GPU implementations, and by default the GPU implementation takes priority: an op that is not pinned to a device is placed on a GPU when one is available. The following shows how to use a GPU in TensorFlow:
# Run the matrix product a*b on gpu0; a, b, and c are all placed on gpu0
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)
# log_device_placement makes the session log which device holds each variable and op
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print sess.run(c)
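The value printed by `sess.run(c)` can be checked by hand: `a` is a 2x3 matrix filled row-wise with 1..6, `b` is 3x2 with the same values, so the product is 2x2. A plain-Python check, no TensorFlow needed:

```python
a = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]   # shape [2, 3], row-major fill of 1..6
b = [[1.0, 2.0],
     [3.0, 4.0],
     [5.0, 6.0]]        # shape [3, 2]

# c[i][j] = sum over k of a[i][k] * b[k][j]
c = [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(2)]
     for i in range(2)]
print(c)  # [[22.0, 28.0], [49.0, 64.0]]
```

Pinning the computation to '/gpu:0' changes where it runs, not the result.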
The test machine used here has a single GPU; the device mapping and the device placement of each variable and op are:
# Device mapping
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: Tesla K20c, pci bus id: 0000:81:00.0
# Device placement of the variables and ops
a: (Const): /job:localhost/replica:0/task:0/gpu:0
b: (Const): /job:localhost/replica:0/task:0/gpu:0
