AMD CPUs and PCIe: How Large Is the Payload?

32 cores + 128 PCIe lanes: AMD launches the EPYC server processor
Author: Yunzhongzi (云中子)
Category: ChinaByte (比特网)
  This week AMD officially launched its high-performance server processor, EPYC. With its high core count per chip, high memory bandwidth, and support for large numbers of high-speed I/O lanes, EPYC is positioned to overhaul the two-socket server market and, at the same time, reshape expectations for single-socket servers. The new high-performance EPYC family, previously code-named "Naples", brings up to 32 physical "Zen" cores to cloud-based and traditional on-premises data centers; the Zen core has already proven very successful. The first EPYC-based servers will launch in June, with broad support from original equipment manufacturers (OEMs) and channel partners.
  Forrest Norrod, senior vice president and general manager of AMD's Enterprise, Embedded and Semi-Custom (EESC) business unit, said: "With the new EPYC processors, AMD begins a new chapter in its journey. AMD EPYC processors will set a new standard for two-socket server performance and reliability and, as we demonstrated today, the industry's first no-compromise single-socket solutions open up further opportunities. We believe this new lineup, with its unique combination of performance, design flexibility, and disruptive total cost of ownership (TCO), can reshape a significant portion of the data-center market."
  On the hardware side, EPYC uses a highly scalable 32-core system-on-chip (SoC) design in which every core supports two high-performance threads. Each EPYC device provides 8 memory channels; in a two-socket server this yields 16 memory channels supporting up to 32 DDR4 DIMMs, for a total memory capacity of up to 4 TB.
  It is a complete SoC: fully integrated high-speed I/O provides 128 PCIe lanes with no separate chipset required; a highly optimized cache hierarchy supports high-performance, energy-efficient computing; and in two-socket systems the coherent Infinity Fabric interconnect, backed by dedicated hardware, links the two EPYC CPUs.
  Matthew Eastwood, senior vice president at IDC, said: "Today there is no fully featured, high-performance server processor available in a single-socket configuration. The single-socket servers on the market force buyers to purchase more expensive two-socket servers just to get the memory bandwidth and I/O needed to feed multi-core compute performance. EPYC changes that by delivering the right amount of high-performance cores, memory, and I/O for today's workloads in a single-processor solution."
Q: Is there a difference in the number of pins between AMD CPUs and Intel CPUs?
AMD CPUs can only be used with AMD motherboards, and Intel CPUs can only be used with Intel motherboards.
CPU performance is not just about clock speed. Although the i3 (2.13 GHz) is clocked slightly lower than the 6600 (2.2 GHz), it has Hyper-Threading, so for multitasking it is far beyond what the 6600 can do. In tests, the i3 performs at least on par with a P8700 (2.53 GHz). I typed all this on my phone — please accept this answer.
At the moment AMD's single-core performance is weak — roughly 60% of the single-core performance of a comparable Intel CPU — so even with a good graphics card, frame rates in older games that depend on one core will disappoint. On the other hand AMD is cheap, and the 8 cores of the FX series can make up for the weak per-core performance: an 8-core FX goes up against a 4-core i5. AMD currently has no true high-end CPU; the top part, the FX-9590, is just an FX-8350 built from well-binned chips that AMD has factory-overclocked, buying performance with raw clock speed at the cost of high temperatures and high power consumption. So AMD's "high-end" line is something of a joke, and the server market, where AMD's share is tiny, is dominated by Intel. AMD's low end is the Athlon line: 2 to 4 cores, usually 4, with new boxed parts in the 300-600 RMB range. Representative models are the 860K, 870K, and 880K, which line up against Intel's i3-3220, i3-4160, and i3-4170. Note that these three AMD parts are true quad-cores while the i3 is a dual-core; yet because the i3's single cores are strong and it has Hyper-Threading, and because AMD's per-core performance is weak, Intel's dual-core/four-thread chips end up performing about the same as AMD's real four-core/four-thread chips. AMD's mid-range is the FX series — FX-6300, FX-6320, FX-6350, FX-8300, FX-8320, FX-8350, and the recent FX-8370 — which line up roughly against the i3-4170, i3-6100, second- and third-generation i5s, the i5-4590, and the i5-4690.
Among AMD's mid-range CPUs, the FX-8300 and FX-8320 are the best value. A tray FX-8300 costs only about 650 RMB online; with a decent air cooler the total is about 800 RMB. Pair it with a reasonably good motherboard and overclock it from 3.3 GHz to 4.2 GHz, and its performance is essentially that of an i5-4690 — a CPU plus a good cooler, overclocked, matching a chip in the 1100 RMB class, and with the better cooler the temperatures stay manageable. The equivalent i5-4690 combination (tray CPU plus cooler) runs about 1200 RMB, a full 400 RMB more; that 400 RMB can go toward a better graphics card or an extra SSD.
AMD also has the APU line, which integrates relatively powerful graphics. The models are prefixed A4, A6, A8, and A10. The A4 and A6 parts were dual-core APUs with early, weak integrated graphics, often seen in Lenovo desktops, and have long been discontinued. The A8 and A10 are today's mainstream APUs; the best value is the A8-7650K at just over 500 RMB boxed. An APU is a CPU plus integrated graphics, so you can play undemanding games such as LOL without buying a graphics card. With dual-channel memory the integrated GPU gains 50% or more; with dual-channel 1600 MHz memory, LOL runs smoothly at 60 FPS on maximum settings. AMD's APUs are well suited to LOL and office work — no discrete card needed, and very power-efficient.
[Intel] The current low end is the Pentium dual-core line and the i3 line, which line up against AMD's Athlon and APU parts. Pentiums are dual-core without Hyper-Threading, so they cannot match an i3; their CPU performance is below the Athlon's, but they include integrated graphics, so they are common in cheap office machines that skip a discrete card, even though their integrated graphics are weaker than an APU's. The i3, with Hyper-Threading on top of already strong single-core performance, is noticeably faster than a Pentium, and its integrated graphics are somewhat better too, though still behind an APU's; it also suits office work and entry-level builds. The mid-range is the i5, which lines up against AMD's 8-core FX parts; it is the mainstream, most widely used CPU, also with integrated graphics weaker than an APU's. The high end is the i7, a 4-core/8-thread CPU with no real rival apart from AMD's FX-9590, and the newer i7-6700K easily beats AMD's best. i7 systems typically cost 5000 RMB and up for the tower alone; an i7 build without a discrete card can be had from about 3500 RMB, but it is only good for light gaming (LOL at around 80 FPS on maximum settings) and general home use — the typical i7 build is a high-end gaming machine. There are also server CPUs: the consumer-friendly Xeon E3 parts are somewhat better than an i5 for gaming and cheap as tray chips, though boxed retail units carry full server pricing — the E3-1231 v3 was about 1300 RMB as a tray part but 1800 RMB boxed. It was hyped as "i7 performance at i5 price" and long championed by enthusiasts, but in reality it falls short of the i7-4790 because of its lower clocks, let alone the i7-4790K, so view the E3 rationally; it is not that magical.
Bottom line: buy what your budget allows, whether Intel or AMD. There is no absolute rule that one CPU is "for gaming" and another "for office work." Don't let other people sway you; think about what your build is mainly for, what it is secondarily for, and how much you can spend, then weigh the trade-offs and look for the best value. A PC has several components — save 50 RMB on each and you free up at least 300 RMB, enough for another part (provided quality isn't compromised).
The i3 has the higher clock speed. For ordinary games a 512 MB graphics card is no different from a 1 GB card, but for demanding games the difference is noticeable, because big games use complex textures and geometry and need more video memory.
The clock speeds are the same; the difference in speed comes from the cache.
1. The X4 740 has no integrated graphics, so the build must include a discrete card, whereas the i3-3220 has integrated graphics and can do without one — decide based on the games you want to play. 2. The i3-3220 is a dual-core/four-thread part, while the X4 740 is four-core/four-thread; overall the i3-3220 is clearly stronger, and its single-core performance beats the X4 740 as well, so for frequency-sensitive programs the i3-3220 is the better choice, while for well-threaded programs the X4 740 can be chosen on price. 3. On power, the i3-3220 is rated at 55 W TDP versus 65 W TDP for the X4 740.
小鸡 has it right!
These two processors use different architectures and different process nodes, so they cannot be compared directly. Although the i3-3220 is a dual-core processor and the X4 740 a quad-core, the i3-3220's single-core performance is far stronger, so in overall performance the i3-3220 comfortably beats the X4 740. That said, the X4 740 is good value and a staple of budget mainstream builds: it handles today's big online games without trouble and can run large single-player titles at medium settings. Which to choose depends on your needs and budget. Actually, comparing the i3-3220 with the FX-6300, and setting aside the i3's 55 W low-power advantage, the FX-6300 delivers the better overall performance. If you are only pairing it with an HD 7770 graphics card, then given your earlier plan and requirements the X4 740 is entirely sufficient, and its power draw is modest at just 65 W.
http://www.hudong.com/wiki/PCI-E
Many transmission standards have grown out of high-speed serial architectures, among them HyperTransport, InfiniBand, RapidIO, and StarFabric. Each is backed by different companies in the industry and by substantial R&D investment, so each claims to be unique and superior. The main differences lie in how they trade off scalability, flexibility, latency, and unit cost. One example is adding a complex header to each packet to support complex routed transfers (PCI Express does not support this); the extra information reduces the effective bandwidth of the interface and complicates transfers, but in return new software can exploit the capability — for instance, such an architecture needs software that tracks topology changes in order to support hot-plugging, which InfiniBand and StarFabric can do. Another example is shrinking the packets to reduce latency; smaller packets mean the header takes up a larger share of each packet, again reducing effective bandwidth, as in RapidIO and HyperTransport. PCI Express takes the middle road: it is positioned as a system interconnect (bus) rather than a device interface or routed network protocol, and its goal of software transparency constrains the protocol and somewhat increases its latency.
============================
5.1 TLP Format
When a processor or another PCIe device accesses a PCIe device, the data to be transferred is first packed by the transaction layer into one or more TLPs, which are then passed down through the other layers of the PCIe stack and sent out. The basic TLP format is shown in Figure 5-1.
A complete TLP consists of one or more optional TLP Prefixes, a TLP header, a Data Payload, and an optional TLP Digest. The TLP header is the defining part of a TLP; different TLP types define it differently. It carries the bus transaction type of the TLP, routing information, and a series of other fields. The Data Payload length is variable, from 0 up to 1024 DW.
The TLP Digest is optional; whether a TLP carries one is indicated in the TLP header. The Data Payload is also optional: some TLPs carry no payload at all, such as memory read requests and the completions for configuration and I/O writes.
The TLP Prefix was introduced in the PCIe V2.1 specification and comes in two kinds, Local TLP Prefix and EP-EP (end-to-end) TLP Prefix. A Local TLP Prefix carries information between the two ends of a single PCIe link, while an EP-EP TLP Prefix carries information between the sending device and the receiving device. The purpose of the TLP Prefix is to extend the TLP header and thereby support some of the new features of PCIe V2.1.
A TLP header is 3 or 4 double words (DW) long. The first DW holds the generic TLP header; the remaining fields depend on the Type field of the generic header. The generic TLP header consists of the Fmt, Type, TC, Length, and other fields, as shown in Figure 5-2.
If a memory read or write TLP uses 64-bit addressing, its header is 4 DW long; otherwise it is 3 DW. Completion TLPs carry no address information, so their header is 3 DW. Bytes 4-15 of the header depend on the particular TLP; those fields are described below together with the specific TLPs.
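As a concrete illustration of the layout just described, the sketch below decodes the first DW of a generic TLP header into its named fields. The byte and bit positions follow the PCIe base specification (they are not taken from Figure 5-2, which is not reproduced here), so treat this as an assumption-laden sketch rather than a verbatim rendering of the book's figure.

#include <stdint.h>

/* Decoding the first DW of a generic TLP header:
 *   byte 0: Fmt[2:0] | Type[4:0]
 *   byte 1: TC[2:0] in bits 6:4, Attr[2] (IDO) in bit 2, TH in bit 0
 *   byte 2: TD, EP, Attr[1:0], AT[1:0], Length[9:8]
 *   byte 3: Length[7:0]
 */
typedef struct {
    uint8_t  fmt;     /* 3 bits: header size and payload presence              */
    uint8_t  type;    /* 5 bits: bus transaction (MRd, MWr, Cpl, ...)          */
    uint8_t  tc;      /* 3 bits: traffic class                                 */
    uint8_t  th, td, ep;
    uint8_t  attr;    /* Attr[2:0]: IDO, Relaxed Ordering, No Snoop            */
    uint8_t  at;      /* 2 bits: address type (used with ATS/IOMMU)            */
    uint16_t length;  /* payload length in DW; the value 0 encodes 1024 DW     */
} tlp_hdr_dw0;

static tlp_hdr_dw0 tlp_decode_dw0(const uint8_t b[4])
{
    tlp_hdr_dw0 h;
    h.fmt    = (b[0] >> 5) & 0x7;
    h.type   =  b[0]       & 0x1f;
    h.tc     = (b[1] >> 4) & 0x7;
    h.attr   = (uint8_t)((((b[1] >> 2) & 0x1) << 2) | ((b[2] >> 4) & 0x3)); /* {IDO, RO, NS} */
    h.th     =  b[1]       & 0x1;
    h.td     = (b[2] >> 7) & 0x1;
    h.ep     = (b[2] >> 6) & 0x1;
    h.at     = (b[2] >> 2) & 0x3;
    h.length = (uint16_t)(((b[2] & 0x3) << 8) | b[3]);
    return h;
}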
5.1.1 The Fmt and Type Fields of the Generic TLP Header
The Fmt and Type fields together identify the bus transaction a TLP carries, whether its header is 3 DW or 4 DW long, and whether the TLP carries a data payload. Their meaning is shown in Table 5-1.
Table 5-1 The Fmt[2:0] field

  Fmt     Meaning
  0b000   TLP header is 3 DW, no data payload
  0b001   TLP header is 4 DW, no data payload
  0b010   TLP header is 3 DW, with data payload
  0b011   TLP header is 4 DW, with data payload
  0b100   TLP Prefix
  others  Reserved by the PCIe bus
All read request TLPs carry no data and all write request TLPs carry data, while the remaining TLPs may or may not carry data; a completion, for example, may carry data or may carry only completion status and no data. The Type field of the TLP holds the TLP type, i.e. the bus transaction supported by the PCIe bus. It is 5 bits wide and its meaning is shown in Table 5-2.
Table 5-2 The Type[4:0] field

  TLP       Fmt[2:0]        Type[4:0]      Description
  MRd       0b000 / 0b001   0b0 0000       Memory read request; 3- or 4-DW header, no data.
  MRdLk     0b000 / 0b001   0b0 0001       Locked memory read request; 3- or 4-DW header, no data.
  MWr       0b010 / 0b011   0b0 0000       Memory write request; 3- or 4-DW header, with data.
  IORd      0b000           0b0 0010       I/O read request; 3-DW header, no data.
  IOWr      0b010           0b0 0010       I/O write request; 3-DW header, with data.
  CfgRd0    0b000           0b0 0100       Type 0 configuration read request; 3-DW header, no data.
  CfgWr0    0b010           0b0 0100       Type 0 configuration write request; 3-DW header, with data.
  CfgRd1    0b000           0b0 0101       Type 1 configuration read request; no data.
  CfgWr1    0b010           0b0 0101       Type 1 configuration write request; with data.
  (two further bus transactions, not covered in this book)
  Msg       0b001           0b1 0r2r1r0    Message request; 4-DW header, no data. "rrr" is the message Route field, described below.
  MsgD      0b011           0b1 0r2r1r0    Message request with data; 4-DW header.
  Cpl       0b000           0b0 1010       Completion without data; 3-DW header. Used for I/O and configuration write completions.
  CplD      0b010           0b0 1010       Completion with data; 3-DW header. Used for memory read, I/O read, configuration read, and atomic-operation completions.
  CplLk     0b000           0b0 1011       Locked completion without data; 3-DW header.
  CplDLk    0b010           0b0 1011       Locked completion with data; 3-DW header.
  FetchAdd  0b010 / 0b011   0b0 1100       Fetch-and-Add atomic operation.
  Swap      0b010 / 0b011   0b0 1101       Swap atomic operation.
  CAS       0b010 / 0b011   0b0 1110       CAS (Compare-and-Swap) atomic operation.
  LPrfx     0b100           0b0 L3L2L1L0   Local TLP Prefix.
  EPrfx     0b100           0b1 E3E2E1E0   End-End TLP Prefix.
As the table shows, memory read and write requests, I/O read and write requests, and configuration read and write requests share the same Type values; for example, memory read and memory write requests both use Type 0b0 0000. The PCIe specification then uses the Fmt field to tell reads from writes: a request whose Fmt field says "with data" is necessarily a write, and a request whose Fmt field says "no data" is necessarily a read.
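The small sketch below shows this disambiguation rule in code: the same Type value is classified as a read or a write purely from the Fmt field's with-data/no-data distinction. The enum and function names are illustrative, not from the text.

#include <stdint.h>

enum tlp_kind { TLP_MEM_READ, TLP_MEM_WRITE, TLP_IO_READ, TLP_IO_WRITE, TLP_OTHER };

/* A request with data is a write, one without data is a read (Tables 5-1 and 5-2). */
static enum tlp_kind classify_request(uint8_t fmt, uint8_t type)
{
    int with_data = (fmt == 0x2) || (fmt == 0x3);   /* Fmt 0b010 / 0b011 */

    switch (type & 0x1f) {
    case 0x00: return with_data ? TLP_MEM_WRITE : TLP_MEM_READ;   /* MWr / MRd   */
    case 0x02: return with_data ? TLP_IO_WRITE  : TLP_IO_READ;    /* IOWr / IORd */
    default:   return TLP_OTHER;
    }
}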
PCIe data transfers resemble PCI transfers in some respects. Memory write TLPs are sent using the posted model, while the other bus transactions use the non-posted model.
The PCIe specification requires all non-posted requests to use the split-transaction model. When a PCIe device issues a memory read, an I/O read or write, or a configuration read or write, it first sends the request TLP to the target device; when the target receives the request, it returns the data and completion status to the source device in completion packets (Cpl or CplD).
Memory reads, I/O reads, and configuration reads use CplD completions, because the target must return data to the source. I/O writes and configuration writes use Cpl completions: the target has no data to return, but it must tell the source that the write has completed and the data has been delivered successfully.
On the PCIe bus, memory and I/O writes carry their data together with the packet header, while for memory and I/O reads the source first sends a read request TLP to the target, and the target sends completion packets back once the data is ready.
The specification also defines the MRdLk packet, whose purpose is compatibility with the PCI bus lock operation; the specification discourages its use, because locking severely degrades PCIe transfer efficiency.
Unlike PCI, the PCIe specification defines message packets, Msg and MsgD; the difference between the two is that one can carry data and the other cannot.
PCIe V2.1 adds further bus transactions: FetchAdd, Swap, CAS, LPrfx, and EPrfx. LPrfx and EPrfx correspond to the Local TLP Prefix and EP-EP TLP Prefix respectively; a TLP Prefix is one DW long, and the LPrfx and EPrfx transactions use it to extend the TLP header. This section does not discuss the TLP Prefix further. FetchAdd, Swap, and CAS let a PCIe device perform atomic operations; these transactions are covered in detail in Section 5.3.5.
5.1.2 The TC Field
The TC field gives the traffic class of the TLP. PCIe defines eight traffic classes, TC0 through TC7, with TC0 as the default; the field is tied to PCIe QoS. A PCIe device uses TC to distinguish different kinds of traffic. Most endpoints (EPs) implement only one virtual channel (VC) and therefore send all TLPs with TC0, but some EPs with stronger real-time requirements provide registers that let software choose the TC.
The extended configuration space of Intel's High Definition Audio controller, for example, contains a TCSEL register. System software can program it so that TLPs issued by the audio controller use an appropriate TC: TC7 for latency-sensitive control traffic and TC0 for ordinary data. In a concrete design, an EP may equally well place the TC-selection register in a BAR-mapped region rather than in PCI configuration space as Intel's audio controller does.
Many root complexes (RCs) today support only a single VC, in which case using different TCs gains little. The MCH of x86 processors typically supports two VCs, most PowerPC processors support only one, and most PLX switches likewise support only two.
Some RCs, such as the MPC8572 processor, can also choose the TC of the TLPs they issue: the PCIe outbound window register (PEXOWARn) contains a TC field that determines the TC used by outgoing TLPs. Different TCs can map onto different VCs of the PCIe link, and different VCs are arbitrated with different priorities, so by adjusting the TC of its outgoing TLPs an EP or RC can adjust which VC they use and hence their priority.
5.1.3 The Attr Field
The Attr field is 3 bits wide. Bit 2 indicates whether the TLP uses PCIe ID-based Ordering; bit 1 indicates whether it uses Relaxed Ordering; and bit 0 indicates whether the TLP requires cache-coherency handling when it reaches memory through the RC. The Attr field is shown in Figure 5-3.
A TLP may enable ID-based Ordering and Relaxed Ordering at the same time. Relaxed Ordering first appeared in the PCI-X specification, where it was used to improve PCI-X transfer efficiency; ID-based Ordering was introduced by the PCIe V2.1 specification. The ordering models a TLP may use are listed in Table 5-3.
Table 5-3 Ordering models supported by a TLP

  Attr[2] (IDO)   Attr[1] (RO)   Ordering model
  0               0              Default ordering, i.e. the strong ordering model
  0               1              PCI-X Relaxed Ordering model
  1               0              ID-Based Ordering (IDO) model
  1               1              Relaxed Ordering and IDO combined
Under the standard strong-ordering model, along the whole transfer path a PCIe device processes TLPs of the same kind strictly in order: if a device sends two memory write TLPs, the second must wait until the first has completed, and even if the first packet is blocked somewhere along the way, the second must still wait.
With the Relaxed Ordering model, the later memory write TLP may pass the earlier one and be executed first, which improves link utilization. TLPs issued by one device may target different addresses; a TLP that entered the transmit queue first may temporarily be unable to proceed, but this need not hold up later TLPs whose destinations, and hence whose sending conditions, are different.
Note that even under the strong-ordering model, TLPs of different kinds may pass one another on the same PCIe link; a memory write TLP, for example, may overtake a memory read request TLP. With Relaxed Ordering enabled, more kinds of reordering become possible during TLP delivery, but all of them remain subject to conditions. To avoid deadlock, the PCIe specification also defines the ordering rules that govern how different packets may pass one another.
PCIe V2.1 introduces a further ordering model, IDO (ID-Based Ordering), which ties ordering to the data stream of a transfer.
Bit 0 of the Attr field is the "No Snoop Attribute" bit. When it is 0, the data carried by the TLP must be kept coherent with the caches as it crosses the FSB; the FSB performs this coherency handling automatically by bus snooping, without software involvement. When it is 1, the FSB does not keep the TLP's data coherent with the caches, and software must guarantee cache coherency for the transfer.
The PCI bus has no counterpart of the No Snoop bit, so a PCI device performing DMA to memory always triggers cache-coherency handling. This "automatic" coherency does not always bring higher efficiency.
When a PCIe device performs a large DMA read from memory, say 512 MB, cache-coherency handling does not speed the transfer up but slows it down: the data will almost never hit in the cache, yet the FSB still spends a Snoop Phase on bus snooping for every transaction, and the processor's coherency handling costs clock cycles in that Snoop Phase (the Snoop Phase is one stage of an FSB bus transaction, see Figure 3-6).
In such cases a better approach is to first use software instructions to make the caches and main memory coherent, set the No Snoop bit to 1, and then start the DMA read. The same technique can also improve efficiency when DMA-writing a large region of data.
In addition, when the memory a PCIe device accesses is not cacheable, setting the No Snoop bit likewise avoids the FSB's cache-coherency work and improves FSB efficiency. The No Snoop bit is an important improvement PCIe makes over a shortcoming of the PCI bus.
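A minimal sketch of the driver-side pattern just described — make the buffer coherent in software, then run the DMA with No Snoop set so the FSB skips per-transaction snooping. All three helper functions are hypothetical placeholders, not a real platform or device API.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical device/platform helpers -- placeholders, not a real API. */
extern void cache_flush_range(const void *buf, size_t len);   /* write back + invalidate */
extern void ep_set_tlp_attr(uint32_t attr_bits);              /* device-specific control  */
extern void ep_start_dma_read(uint64_t bus_addr, size_t len); /* kick off the transfer    */

#define TLP_ATTR_NO_SNOOP 0x1u   /* Attr[0] of the TLP header */

static void dma_read_large_buffer(const void *buf, size_t len, uint64_t bus_addr)
{
    cache_flush_range(buf, len);          /* software guarantees coherency first */
    ep_set_tlp_attr(TLP_ATTR_NO_SNOOP);   /* subsequent TLPs carry No Snoop = 1  */
    ep_start_dma_read(bus_addr, len);
}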
5.1.4 Other Fields of the Generic TLP Header
Besides Fmt and Type, the generic TLP header contains the following fields.
1. The TH, TD, and EP bits
TH = 1 indicates that the TLP carries TPH (TLP Processing Hint) information, an important feature introduced by PCIe V2.1. The sender can use TPH to tell the receiver about the characteristics of the data it is about to access, so that the receiver can prefetch and manage the data sensibly; TPH is described in detail in Section 5.3.6.
TD indicates whether the TLP Digest is present (1 = present, 0 = absent), while EP indicates whether the data in the TLP is valid (1 = invalid, 0 = valid).
2. The AT field
The AT field relates to PCIe address translation. Some PCIe devices contain an ATC (Address Translation Cache) whose main function is address translation; the field can only be used in processor systems that support an IOMMU.
The AT field can serve address translation between the memory domain and the PCI bus domain, but the main purpose of the field is to make it easier for several virtual machines to share one PCIe device. Interested readers can consult the Address Translation Services specification, an important part of PCI's I/O Virtualization specifications; for introductory background on virtualization, see 《系统虚拟化——原理与实现》 (Tsinghua University Press).
3. The Length field
The Length field describes the size of the TLP's Data Payload. The PCIe specification allows a payload of 1 B to 4096 B; the purpose of the Length field is to improve bus transfer efficiency.
When a PCI device transfers data, the target does not know the actual transfer size in advance, which limits PCI's efficiency to some degree. On PCIe, the target learns from the Length field how much data the source will send or request, so it can manage its receive buffers sensibly and perform cache-coherency handling according to the actual size.
When a PCI device DMA-writes 4 KB of data to main memory, its DMA controller holds the destination address and the transfer size and then starts the write. Because PCI is a shared bus, transferring 4 KB may take several PCI write transactions, and no individual write transaction knows when the DMA controller will finish the transfer.
If those writes must also cross a chain of PCI bridges to reach memory, none of the bridges along the path can predict when the DMA will finish either. This unpredictability prevents full use of PCI bandwidth and easily wastes PCI bridge data buffers.
PCIe avoids wasting link bandwidth by means of the TLP Length field. Note that Length is expressed in DW, so its granularity is one DW; when a PCIe master transfers less than one DW, or the data is not DW-aligned, it must use the byte-enable fields ("DW BE"), described in detail in Section 5.3.1.
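The sketch below derives the DW-granular Length field and the First/Last DW byte enables for an arbitrary byte-aligned request, following the rule just stated (for a single-DW request the PCIe rules fold everything into First DW BE and force Last DW BE to 0b0000). It is an illustrative calculation, not code from the book.

#include <stdint.h>

struct dw_be { uint16_t length_dw; uint8_t first_be; uint8_t last_be; };

/* Assumes bytes >= 1. addr is a byte address; the result is in DW units. */
static struct dw_be compute_length_and_be(uint64_t addr, uint32_t bytes)
{
    struct dw_be r;
    uint32_t first_off = addr & 0x3;                  /* byte offset inside first DW */
    uint32_t last_off  = (addr + bytes - 1) & 0x3;    /* byte offset inside last DW  */
    uint64_t first_dw  = addr >> 2;
    uint64_t last_dw   = (addr + bytes - 1) >> 2;

    r.length_dw = (uint16_t)(last_dw - first_dw + 1);
    r.first_be  = (uint8_t)((0xF << first_off) & 0xF);  /* enable bytes from first_off up */
    r.last_be   = (uint8_t)(0xF >> (3 - last_off));     /* enable bytes up to last_off    */

    if (r.length_dw == 1) {                             /* single DW: merge into First BE */
        r.first_be &= r.last_be;
        r.last_be   = 0x0;
    }
    return r;
}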
Footnotes:
- A PowerPC processor can also avoid this cache-coherency handling by programming its Inbound registers appropriately.
- On receiving such a TLP, the FSB performs no cache-coherency handling.
- A memory read request TLP has no Data Payload field; its Length field indicates how much data is to be read.
- When several PCI devices share one PCI bus, a device does not hold the bus for long; after using it for a certain time it must give up the bus.
5.4 TLP Parameters Related to the Data Payload
Some TLPs on the PCIe bus carry a Data Payload, for example memory write requests and memory read completions. The payload size a TLP may carry is governed by the Max_Payload_Size, Max_Read_Request_Size, and RCB parameters, which are described in turn below.
5.4.1 The Max_Payload_Size Parameter
The PCIe specification allows a TLP data payload of up to 4 KB, but a given PCIe device is not necessarily able to send packets that large. Every PCIe device has a Max_Payload_Size parameter and a Max_Payload_Size Supported parameter, defined in the Device Control register and the Device Capabilities register respectively.
Max_Payload_Size Supported records the largest TLP payload the device can handle; it is fixed by the device's hardware logic and cannot be changed by system software. Max_Payload_Size records the payload limit the device actually uses; it is negotiated between the devices at the two ends of the PCIe link and is the value used for real data transfers.
When sending, a PCIe device uses Max_Payload_Size to bound the TLP payload: if the data to be sent exceeds Max_Payload_Size, it is split across several TLPs. When receiving, the device likewise requires incoming TLPs not to exceed Max_Payload_Size; if a received TLP's Length field exceeds the parameter, the device treats the TLP as malformed.
When an RC or EP sends memory read completions, each completion's payload must also stay within Max_Payload_Size; if the data is larger, the device must send several read completion packets, and those completions must additionally satisfy the RCB parameter, described below.
In practice, even if a device's Max_Payload_Size Supported is 256 B, 512 B, 1024 B, or more, when the device at the other end of the link only supports 128 B, system software initializes the device's Max_Payload_Size from the partner's Max_Payload_Size Supported; that is, it programs Max_Payload_Size with the smaller of the two ends' Max_Payload_Size Supported values.
In most x86 systems the MCH or ICH has a Max_Payload_Size Supported of 128 B, which means that PCIe devices attached directly to the MCH or ICH cannot use payloads larger than 128 B for DMA reads and writes; in PowerPC systems the parameter is usually 256 B.
Most EPs today have a Max_Payload_Size Supported of no more than 512 B, because most RCs do not support more than 512 B either; a larger value in the EP alone would not improve transfer efficiency.
Max_Payload_Size correlates directly with link efficiency: the larger the parameter, the better the PCIe link bandwidth is utilized; the smaller it is, the worse.
The PCIe specification recommends that latency-sensitive PCIe devices not use an overly large Max_Payload_Size, so the parameter is sometimes set below the maximum the link would otherwise allow.
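The sketch below illustrates the two behaviours described in this section: software programs Max_Payload_Size with the smaller Max_Payload_Size Supported of the two link partners, and a transfer larger than Max_Payload_Size is split into several write TLPs. Sizes are in bytes; the 4 KB address-boundary rule and alignment details are ignored for brevity.

#include <stdint.h>
#include <stdio.h>

static uint32_t negotiate_mps(uint32_t mpss_a, uint32_t mpss_b)
{
    return mpss_a < mpss_b ? mpss_a : mpss_b;   /* min of both ends' MPSS */
}

static void split_write(uint64_t addr, uint32_t bytes, uint32_t mps)
{
    while (bytes > 0) {
        uint32_t chunk = bytes < mps ? bytes : mps;
        printf("MWr TLP: addr=0x%llx, payload=%u B\n",
               (unsigned long long)addr, chunk);
        addr  += chunk;
        bytes -= chunk;
    }
}

int main(void)
{
    uint32_t mps = negotiate_mps(512, 128);   /* EP supports 512 B, RC only 128 B */
    split_write(0x80000000ull, 4096, mps);    /* 4 KB DMA write -> 32 TLPs of 128 B */
    return 0;
}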
5.4.2 The Max_Read_Request_Size Parameter
Max_Read_Request_Size is set per PCIe device and determines how much data the device may request from a target in a single read.
The parameter is defined in the Device Control register. It bounds the Length field of memory read request TLPs: Length must not exceed Max_Read_Request_Size. In a memory read request TLP, Length states how much data is to be read from the target device.
Note that Max_Read_Request_Size has no direct relationship with Max_Payload_Size; Max_Payload_Size only applies to memory write requests and memory read completions.
The PCIe specification requires that a memory read request not ask for more data than Max_Read_Request_Size, i.e. the request's Length field must not exceed this parameter. If a single read operation needs a range larger than Max_Read_Request_Size, the device must issue several memory read request TLPs to the target.
The maximum allowed Max_Read_Request_Size is 4 KB, but system software must choose the value according to the hardware, because the specification requires an EP issuing memory reads to have buffering large enough to receive the data returned by the target.
If an EP's Max_Read_Request_Size were set to 4 KB, the EP would have to reserve a 4 KB buffer for every outstanding 4 KB read request it issues, which is a demanding requirement for most EPs; real designs therefore limit Max_Read_Request_Size.
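As a sketch of the splitting rule just described, a DMA read larger than Max_Read_Request_Size is issued as several memory read request TLPs, each staying within MRRS; the completer answers each request separately. This is an illustrative calculation, not a driver API.

#include <stdint.h>
#include <stdio.h>

static void issue_dma_read(uint64_t addr, uint32_t bytes, uint32_t mrrs)
{
    while (bytes > 0) {
        uint32_t req = bytes < mrrs ? bytes : mrrs;
        printf("MRd TLP: addr=0x%llx, Length=%u DW\n",
               (unsigned long long)addr, req / 4);   /* Length field is in DW */
        addr  += req;
        bytes -= req;
    }
}

int main(void)
{
    issue_dma_read(0x80000000ull, 64 * 1024, 512);   /* 64 KB read, MRRS = 512 B -> 128 requests */
    return 0;
}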
5.4.3 The RCB Parameter
The RCB bit is defined in the Link Control register and selects the value of the RCB (Read Completion Boundary) parameter, which on PCIe is either 64 B or 128 B. If a PCIe device does not set the RCB size, the default is 64 B for an RC and 128 B for other PCIe devices; the specification allows an RC's RCB to be 64 B or 128 B, while other PCIe devices use 128 B.
On PCIe, one memory read request TLP may have to receive several completion packets from the target before the read operation is finished: a single memory read may request up to 4 KB of data, and the target may need several memory read completion TLPs to deliver it all.
When an EP reads data from an RC or from another EP, it first sends a memory read request TLP to that device; the RC or the other EP then returns the data to the requesting EP in memory read completion TLPs.
If the address range of the data being returned does not cross an RCB boundary, the sender must deliver it in a single completion packet; otherwise it may use several memory read completion TLPs.
Suppose an EP performs a DMA read of the range 0xFFFF-0000 to 0xFFFF-0010. When the RC receives the memory read request TLP it builds the completion; because the range does not cross an RCB boundary, the RC must return the data in a single memory read completion TLP.
If the address range of the returned data does cross an RCB boundary, the sender (the target device) may use one or more completion packets. When it uses several memory read completions, the following rules apply.
The data in the first completion starts at the requested start address; it ends either at the requested end address (when a single completion carries all the data) or on an address that is a multiple of the RCB parameter (when several completions are used).
The last completion starts either at the requested start address (single completion) or on a multiple of the RCB parameter (several completions), and it must end at the requested end address.
Any completion in between must both start and end on addresses that are multiples of the RCB parameter.
When an RC or EP has to use several memory read completions to return the data between 0xFFFE-FFF0 and 0xFFFF-00C7 to the requester, the completions can be organized as in Table 5-9.
Table 5-9 Ways of splitting the memory read completions

  Completion   Option 1                   Option 2                   Option 3
  1st          0xFFFE-FFF0~0xFFFE-FFFF    0xFFFE-FFF0~0xFFFE-FFFF    0xFFFE-FFF0~0xFFFE-FFFF
  2nd          0xFFFF-0000~0xFFFF-003F    0xFFFF-0000~0xFFFF-007F    0xFFFF-0000~0xFFFF-00C7
  3rd          0xFFFF-0040~0xFFFF-007F    0xFFFF-0080~0xFFFF-00C7    -
  4th          0xFFFF-0080~0xFFFF-00BF    -                          -
  5th          0xFFFF-00C0~0xFFFF-00C7    -                          -
The table shows only some of the possibilities; the target device may split the memory read completions in other ways as well. The main reasons PCIe answers one read request with several completions are the cache line size and flow control. In most x86 systems, a memory read completion carries one cache line of data, i.e. 64 B per packet; in addition, shorter completions consume fewer flow-control resources and help avoid congestion.
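The sketch below generates one legal split (the "one RCB-sized chunk at a time" strategy, like Option 1 in Table 5-9): the first completion runs up to the next RCB boundary, middle completions start and end on RCB boundaries, and the last completion ends at the requested end address. Run on the table's example range it reproduces the Option 1 column.

#include <stdint.h>
#include <stdio.h>

static void split_completions(uint64_t start, uint64_t end_incl, uint32_t rcb)
{
    uint64_t cur = start;
    int n = 1;

    while (cur <= end_incl) {
        uint64_t boundary = (cur / rcb + 1) * rcb;              /* next RCB boundary    */
        uint64_t last     = boundary - 1 < end_incl ? boundary - 1 : end_incl;
        printf("CplD %d: 0x%llX-0x%llX (%llu bytes)\n", n++,
               (unsigned long long)cur, (unsigned long long)last,
               (unsigned long long)(last - cur + 1));
        cur = last + 1;
    }
}

int main(void)
{
    split_completions(0xFFFEFFF0ull, 0xFFFF00C7ull, 64);        /* example from Table 5-9 */
    return 0;
}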
This chapter focuses on the PCIe transaction layer; of all the layers in the PCIe stack, the transaction layer is the easiest to understand and the one most directly visible to system software.
Footnotes:
- This is required by the Infinite FC Unit flow-control rule; see Section 9.3.2.
- Some PCIe devices may not implement a Link Control register.
From Wikipedia, the free encyclopedia
PCI Express (Peripheral Component Interconnect Express), officially abbreviated as PCIe, is a computer expansion bus standard designed to replace the older PCI, PCI-X, and AGP bus standards. PCIe has numerous improvements over the aforementioned bus standards, including higher maximum system bus throughput, lower I/O pin count and smaller physical footprint, better performance scaling for bus devices, a more detailed error detection and reporting mechanism, and native hot-plug functionality. More recent revisions of the PCIe standard support hardware I/O virtualization.
The PCIe electrical interface is also used in a variety of other standards, most notably ExpressCard, a laptop expansion card interface.
Format specifications are maintained and developed by the PCI Special Interest Group (PCI-SIG), a group of more than 900 companies that also maintain the conventional PCI specifications. PCIe 3.0 is the latest standard for expansion cards that is available on mainstream personal computers.
PCI Express is used in consumer, server, and industrial applications, as a motherboard-level interconnect (to link motherboard-mounted peripherals), a passive backplane interconnect, and as an expansion card interface for add-in boards.
In virtually all modern PCs, from consumer laptops and desktops to enterprise data servers, the PCIe bus serves as the primary motherboard-level interconnect, connecting the host system processor with both integrated-peripherals (surface mounted ICs) and add-on peripherals (expansion cards.) In most of these systems, the PCIe bus co-exists with 1 or more legacy PCI busses, for backward compatibility with the large body of legacy PCI peripherals.
Conceptually, the PCIe bus is a high-speed serial replacement of the older PCI/PCI-X bus, which was an interconnect using shared address/data lines.
A key difference between the PCIe bus and the older PCI is the bus topology. PCI uses a shared parallel bus architecture, where the PCI host and all devices share a common set of address/data/control lines. In contrast, PCIe is based on a point-to-point topology, with separate serial links connecting every device to the root complex (host). Due to its shared-bus topology, access to the older PCI bus is arbitrated (in the case of multiple masters) and limited to one master at a time, in a single direction. Furthermore, the older PCI clocking scheme limits the bus clock to the slowest peripheral on the bus (regardless of the devices involved in the bus transaction). In contrast, a PCIe bus link supports full-duplex communication between any two endpoints, with no inherent limitation on concurrent access across multiple endpoints.
In terms of bus protocol, PCIe communication is encapsulated in packets. The work of packetizing and de-packetizing data and status-message traffic is handled by the transaction layer of the PCIe port (described later). Radical differences in electrical signaling and bus protocol require the use of a different mechanical form factor and expansion connectors (and thus, new motherboards and new adapter boards); PCI slots and PCIe slots are not interchangeable. At the software level, PCIe preserves backward compatibility with PCI; legacy PCI system software can detect and configure newer PCIe devices without explicit support for the PCIe standard, though PCIe's new features are inaccessible.
The PCIe link between two devices can consist of anywhere from 1 to 32 lanes. In a multi-lane link, the packet data is striped across lanes, and peak data throughput scales with the overall link width. The lane count is automatically negotiated during device initialization and can be restricted by either endpoint. For example, a single-lane PCIe (×1) card can be inserted into a multi-lane slot (×4, ×8, etc.), and the initialization cycle auto-negotiates the highest mutually supported lane count. The link can also dynamically down-configure itself to use fewer lanes, providing some measure of failure tolerance in the presence of bad or unreliable lanes. The PCIe standard defines slots and connectors for multiple widths: ×1, ×4, ×8, ×16, ×32. This allows the PCIe bus to serve both cost-sensitive applications where high throughput is not needed and performance-critical applications such as 3D graphics, networking (10 Gigabit Ethernet, multiport Gigabit Ethernet), and enterprise storage (SAS, Fibre Channel).
As a point of reference, a PCI-X (133 MHz, 64-bit) device and a PCIe device using four lanes (×4) at Gen1 speed have roughly the same peak single-direction transfer rate of about 1064 MB/s. The PCIe bus has the potential to perform better than the PCI-X bus in cases where multiple devices are transferring data simultaneously, or when communication with the PCIe peripheral is bidirectional.
PCIe devices communicate via a logical connection called an interconnect or link. A link is a point-to-point communication channel between two PCIe ports, allowing both to send and receive ordinary PCI requests (configuration read/write, I/O read/write, memory read/write) and interrupts (INTx, MSI). At the physical level, a link is composed of one or more lanes. Low-speed peripherals (such as an 802.11 Wi-Fi card) use a single-lane (×1) link, while a graphics adapter typically uses a much wider (and thus faster) 16-lane link.
A lane is composed of a transmit and a receive pair of differential lines. Each lane is thus composed of four wires or signal paths; conceptually, each lane is a full-duplex byte stream, transporting data packets in 8-bit 'byte' format between the endpoints of a link, in both directions simultaneously. Physical PCIe slots may contain from one to thirty-two lanes, in powers of two (1, 2, 4, 8, 16, and 32). Lane counts are written with an × prefix (e.g., ×16 represents a sixteen-lane card or slot), with ×16 being the largest size in common use.
The bonded serial format was chosen over a traditional parallel bus format because of the latter's inherent limitations, including half-duplex operation, excess signal count, and an inherently lower operating frequency due to timing skew. Timing skew results from separate electrical signals within a parallel interface traveling down conductors of different lengths, on potentially different printed-circuit-board layers, at possibly different signal velocities. Despite being transmitted simultaneously as a single word, signals on a parallel interface experience different travel times and arrive at their destinations at different moments. When the interface clock rate is increased to the point where its inverse (the clock period) is shorter than the largest possible time between signal arrivals, the signals no longer arrive with sufficient coincidence for the transmitted word to be recovered. Since timing skew over a parallel bus can amount to a few nanoseconds, the resulting bandwidth limitation is in the range of hundreds of megahertz.
A serial interface does not exhibit timing skew because there is only one differential signal in each direction within each lane, and there is no external clock signal, since clocking information is embedded within the serial signal. As such, typical bandwidth limitations on serial signals are in the multi-gigahertz range. PCIe is just one example of a general trend away from parallel buses toward serial interconnects; other examples include Serial ATA, USB, and Serial Attached SCSI.
Multichannel serial design increases flexibility by allocating slow devices to fewer lanes than fast devices.
[Figure: various PCI slots, from top to bottom: PCI Express ×4, PCI Express ×16, PCI Express ×1, PCI Express ×16, conventional PCI (32-bit).]
A PCIe card fits into a slot of its own physical size or larger (up to ×16), but may not fit into a smaller PCIe slot (e.g., a ×16 card in a ×8 slot). Some slots use open-ended sockets to permit physically longer cards, and the link negotiates the best available electrical connection. The number of lanes actually wired to a slot may also be fewer than the number its physical size supports.
An example is a ×8 slot that actually runs only at ×1. Such a slot accepts any ×1, ×2, ×4, or ×8 card, but runs it at ×1 speed. This type of socket is described as a "×8 (×1 mode)" slot, meaning it physically accepts ×8 cards but only provides ×1 signalling. The advantage is that it can accommodate a larger range of PCIe cards without requiring the motherboard hardware to support the full transfer rate, keeping design and implementation costs down.
The following table identifies the conductors on each side of the edge connector on a ×4 PCI Express card. The solder side of the printed circuit board (PCB) is the A side, and the component side is the B side.
PCI Express ×4 connector pinout (summary): a presence-detect pin pulled low to indicate that a card is inserted; link-reactivation and power-good signals; a reference-clock differential pair; and, for each of lanes 0-3, a transmit data pair (+ and -) and a receive data pair (+ and -).
A ×1 slot is a shorter version of this, ending after pin 18; ×8 and ×16 slots extend the pattern.
Legend:
  Ground pin  - zero-volt reference
  Power pin   - supplies power to the PCIe card
  Output pin  - signal from the card to the motherboard
  Input pin   - signal from the motherboard to the card
  Open drain  - may be pulled low and/or sensed by multiple cards
  Sense pin   - tied together on the card
  Reserved    - not presently used, do not connect
PCI Express cards are allowed a maximum power consumption of 25 W (×1: 10 W for power-up). Low-profile cards are limited to 10 W (×16: 25 W). PCI Express Graphics (PEG) cards may increase power draw from the slot to 75 W after configuration (3.3 V at 3 A plus 12 V at 5.5 A). Optional connectors add 75 W (6-pin) or 150 W (8-pin) of power, for up to 300 W total.
[Figure: a WLAN PCI Express Mini Card and its connector.]
[Figure: MiniPCI and MiniPCI Express cards compared.]
PCI Express Mini Card (also known as Mini PCI Express, Mini PCIe, and Mini PCI-E) is a replacement for the Mini PCI form factor, based on PCI Express; it was developed by the PCI-SIG. The host device supports both PCI Express and USB 2.0 connectivity, and each card may use either standard. Most laptop computers built after 2005 are based on PCI Express and can have several Mini Card slots.
PCI Express Mini Cards are 30 × 50.95 mm. There is a 52-pin edge connector, consisting of two staggered rows on a 0.8 mm pitch. Each row has 8 contacts, a gap equivalent to 4 contacts, then a further 18 contacts. A half-length card is also specified, at 30 × 26.8 mm. Cards have a thickness of 1.0 mm (excluding components).
The PCI Express Mini Card edge connector provides multiple connections and buses:
Wires to diagnostic LEDs for wireless-network (i.e., Wi-Fi) status on the computer's chassis
SIM card signals for GSM and WCDMA applications (UIM signals in the specification)
Future extension for another PCIe lane
1.5 and 3.3 volt power
Despite the mini-PCI Express form factor, a mini-PCI Express slot must have support for the electrical connections an mSATA drive requires. For this reason, only certain notebooks are compatible with mSATA drives. Most compatible systems are based on Intel's newest Sandy Bridge processor architecture, using the new Huron River platform.
Notebooks like Lenovo's newest T-Series, W-Series, and X-Series ThinkPads released in March-April of 2011 have support for an mSATA SSD card in their WWAN card slot. The ThinkPad Edge E220s/E420s, and the Lenovo IdeaPad Y460/Y560 also support mSATA.
Some notebooks (notably the ASUS Eee PC, the Apple MacBook Air, and the Dell mini9 and mini10) use a variant of the PCI Express Mini Card as a solid-state drive. This variant uses the reserved pins and several non-reserved pins to implement SATA and IDE interface passthrough, keeping only USB, ground lines, and sometimes the core PCIe ×1 bus intact. This makes the 'miniPCIe' flash and solid-state drives sold for netbooks largely incompatible with true PCI Express Mini implementations.
Also, the typical ASUS miniPCIe SSD is 71 mm long, which causes the 51 mm Dell model to often be (incorrectly) referred to as half length. A true 51 mm Mini PCIe SSD was announced in 2009, with two stacked PCB layers allowing a higher storage capacity; the announced design preserves the PCIe interface, making it compatible with the standard mini PCIe slot, but no working product has yet been developed, likely as a result of the popularity of the alternative variant.
PCI Express External Cabling (also known as External PCI Express, Cabled PCI Express, or ePCIe) specifications were released by the PCI-SIG in February 2007.
Standard cables and connectors have been defined for ×1, ×4, ×8, and ×16 link widths, with a transfer rate of 250 MB/s per lane. The PCI-SIG also expects the norm to evolve to reach 500 MB/s, as in PCI Express 2.0. The maximum cable length remains undetermined. An example of the use of Cabled PCI Express is a metal enclosure containing a number of PCI slots and PCI-to-ePCIe adapter circuitry; this device would not be possible without the ePCIe specification.
There are several other expansion card types derived from PCIe. These include:
Low-height card
ExpressCard: successor to the PC Card form factor (with ×1 PCIe and USB 2.0; hot-pluggable)
PCI Express ExpressModule: a hot-pluggable modular form factor defined for servers and workstations
XMC: similar to the CMC/PMC form factor (with ×4 PCIe or Serial RapidIO)
AdvancedTCA: a complement to CompactPCI for larger applications; supports serial-based backplane topologies
AMC: a complement to AdvancedTCA; supports processor and I/O modules on ATCA boards (×1, ×2, ×4, or ×8 PCIe)
FeaturePak: a tiny expansion card format (43 × 65 mm) for embedded and small-form-factor applications; it implements two ×1 PCIe links on a high-density connector along with USB, I2C, and up to 100 points of I/O
UIO: a variant from Super Micro Computer Inc designed for use in low-profile rack-mounted chassis; it has the connector bracket reversed, so it cannot fit a normal PCI Express socket, but it is pin-compatible and may be inserted if the bracket is removed
Thunderbolt: a variant from Intel that combines DisplayPort and PCIe protocols in a form factor compatible with Mini DisplayPort
While in early development, PCIe was initially referred to as HSI (for High Speed Interconnect), and underwent a name change to 3GIO (for 3rd Generation I/O) before finally settling on its
name PCI Express. It was first drawn up by a technical working group named the Arapaho Work Group (AWG) that, for initial drafts, consisted only of Intel engineers. Subsequently the AWG expanded to include industry partners.
PCIe is a technology under constant development and improvement. The current PCI Express implementation is version 3.0.
In 2003, PCI-SIG introduced PCIe 1.0a, with a per-lane data rate of 250 MB/s and a transfer rate of 2.5 GT/s.
In 2005, PCI-SIG introduced PCIe 1.1. This updated specification includes clarifications and several improvements, but is fully compatible with PCI Express 1.0a; no changes were made to the data rate.
PCI-SIG announced the availability of the PCI Express Base 2.0 specification on 15 January 2007. The PCIe 2.0 standard doubles the per-lane throughput from the PCIe 1.0 standard's 250 MB/s to 500 MB/s; this means a 32-lane connector (×32) can support aggregate throughput of up to 16 GB/s. The PCIe 2.0 standard uses a base clock speed of 2.5 GHz, while the first version operates at 1.25 GHz.
PCIe 2.0 motherboard slots are fully backward compatible with PCIe v1.x cards. PCIe 2.0 cards are also generally backward compatible with PCIe 1.x motherboards, using the available bandwidth of PCI Express 1.1. Overall, graphic cards or motherboards designed for v2.0 will work with the other being v1.1 or v1.0.
The PCI-SIG also said that PCIe 2.0 features improvements to the point-to-point data transfer protocol and its software architecture.
Intel's first PCIe 2.0 capable chipset was the X38, and boards began to ship from various vendors as of October 21, 2007. AMD started supporting PCIe 2.0 with its 700-series chipsets, and NVIDIA started with the MCP72. All of Intel's prior chipsets, including the P35 chipset, supported PCIe 1.1 or 1.0a.
PCI Express 2.1 supports a large proportion of the management, support, and troubleshooting systems planned for full implementation in PCI Express 3.0. However, the speed is the same as PCI Express 2.0. Most motherboards sold currently come with PCI Express 2.1 connectors.
The PCI Express 3.0 Base specification, revision 3.0, was made available in November 2010, after multiple delays. In August 2007, PCI-SIG announced that PCI Express 3.0 would carry a bit rate of 8 gigatransfers per second and would be backward compatible with existing PCIe implementations. At that time it was also announced that the final specification for PCI Express 3.0 would be delayed until 2011. New features of the PCIe 3.0 specification include a number of optimizations for enhanced signaling and data integrity, including transmitter and receiver equalization, PLL improvements, clock data recovery, and channel enhancements for the currently supported topologies.
Following a six-month technical analysis of the feasibility of scaling the PCIe interconnect bandwidth, PCI-SIG's analysis found out that 8 gigatransfers per second can be manufactured in mainstream silicon process technology, and can be deployed with existing low-cost materials and infrastructure, while maintaining full compatibility (with negligible impact) to the PCIe protocol stack.
PCIe 2.0 delivers 5 GT/s but uses an 8b/10b encoding scheme that imposes a 20 percent ((10−8)/10) overhead on the raw bit rate. PCIe 3.0 removes the requirement for 8b/10b encoding and instead uses a technique called "scrambling", which applies a known binary polynomial to the data stream in a feedback topology; because the scrambling polynomial is known, the data can be recovered by running it back through the inverse polynomial. PCIe 3.0 also adopts a 128b/130b encoding scheme, reducing the overhead to approximately 1.5% ((130−128)/130), as opposed to the 20% overhead of the 8b/10b encoding used by PCIe 2.0. PCIe 3.0's 8 GT/s bit rate therefore effectively delivers double the PCIe 2.0 bandwidth. PCI-SIG expects the PCIe 3.0 specifications to undergo rigorous technical vetting and validation before being released to the industry. This process, which was followed in the development of prior generations of the PCIe Base and various form-factor specifications, includes the corroboration of the final electrical parameters with data derived from test silicon and other simulations conducted by multiple members of the PCI-SIG.
On November 18, 2010, the PCI Special Interest Group officially published the finalized PCI Express 3.0 specification to its members to build devices based on this new version of PCI Express.
PCI Express has replaced AGP as the default interface for graphics cards on new systems. With few exceptions, all graphics cards released in 2009 and 2010 by AMD (ATI) and NVIDIA use PCI Express. NVIDIA uses the high-bandwidth data transfer of PCIe for its Scalable Link Interface (SLI) technology, which allows multiple graphics cards of the same chipset and model number to run in tandem for increased performance. ATI has also developed a multi-GPU system based on PCIe called CrossFire. AMD and NVIDIA have released motherboard chipsets that support up to four PCIe ×16 slots, allowing tri-GPU and quad-GPU configurations.
PCI Express has displaced a major portion of the add-in card market. It was originally common mainly in graphics cards and a few onboard applications, but most sound cards, TV/capture cards, modems, serial port/USB/FireWire cards, and network/Wi-Fi cards that would have used conventional PCI in the past have since moved to PCI Express ×8, ×4, or ×1. While some motherboards still have conventional PCI slots, these are primarily for legacy cards and are being phased out.
The PCIe link is built around dedicated unidirectional couples of serial (1-bit), point-to-point connections known as lanes. This is in sharp contrast to the earlier PCI connection, which is a bus-based system where all the devices share the same bidirectional, 32-bit or 64-bit parallel bus.
PCI Express is a layered protocol, consisting of a transaction layer, a data link layer, and a physical layer. The data link layer is subdivided to include a media access control (MAC) sublayer. The physical layer is subdivided into logical and electrical sublayers; the physical logical sublayer contains a physical coding sublayer (PCS). The terms are borrowed from the OSI networking protocol model.
The PCIe Physical Layer (PHY, PCIEPHY, PCI Express PHY, or PCIe PHY) specification is divided into two sub-layers, corresponding to electrical and logical specifications. The logical sublayer is sometimes further divided into a MAC sublayer and a PCS, although this division is not formally part of the PCIe specification. A specification published by Intel, the PHY Interface for PCI Express (PIPE), defines the MAC/PCS functional partitioning and the interface between these two sub-layers. The PIPE specification also identifies the physical media attachment (PMA) layer, which includes the serializer/deserializer (SerDes) and other analog circuitry; however, since SerDes implementations vary greatly among vendors, PIPE does not specify an interface between the PCS and PMA.
At the electrical level, each lane consists of two unidirectional differential signaling pairs operating at 2.525 Gbit/s. Transmit and receive are separate differential pairs, for a total of four data wires per lane.
A connection between any two PCIe devices is known as a link, and is built up from a collection of one or more lanes. All devices must minimally support single-lane (×1) links. Devices may optionally support wider links composed of 2, 4, 8, 12, 16, or 32 lanes. This allows for very good compatibility in two ways:
A PCIe card physically fits (and works correctly) in any slot that is at least as large as it is (e.g., an ×1-sized card will work in any sized slot);
A slot of a large physical size (e.g., ×16) can be wired electrically with fewer lanes (e.g., ×1, ×4, ×8, or ×12) as long as it provides the ground connections required by the larger physical slot size.
In both cases, PCIe negotiates the highest mutually supported number of lanes. Many graphics cards, motherboards, and BIOS versions are verified to support ×1, ×4, ×8, and ×16 connectivity on the same connection.
Even though the two would be signal-compatible, it is not usually possible to place a physically larger PCIe card (e.g., a ×16-sized card) into a smaller slot, though if the PCIe slots are open-ended, by design or by modification, some motherboards will allow this.
The width of a PCIe connector is 8.8 mm, while the height is 11.25 mm, and the length is variable. The fixed section of the connector is 11.65 mm in length and contains two rows of 11 pins each (22 pins total), while the length of the other section is variable depending on the number of lanes. The pins are spaced at 1 mm intervals, and the thickness of the card going into the connector is 1.8 mm.
For example, a ×16 connector has 2 × 82 = 164 pins in total, of which 2 × 71 = 142 are in the variable (lane-dependent) section.
PCIe sends all control messages, including interrupts, over the same links used for data. The serial protocol can never be blocked, so latency is still comparable to conventional PCI, which has dedicated interrupt lines.
Data transmitted on multiple-lane links is interleaved, meaning that each successive byte is sent down successive lanes. The PCIe specification refers to this interleaving as data striping. While requiring significant hardware complexity to synchronize (or deskew) the incoming striped data, striping can significantly reduce the latency of the nth byte on a link. Due to padding requirements, striping may not necessarily reduce the latency of small data packets on a link.
As with other high-data-rate serial transmission protocols, clocking information is embedded in the signal. At the physical level, PCI Express 2.0 utilizes the 8b/10b encoding scheme to ensure that strings of consecutive ones or consecutive zeros are limited in length, so that the receiver does not lose track of where the bit edges are. In this coding scheme every 8 (uncoded) payload bits of data are replaced with 10 (encoded) bits of transmit data, causing a 20% overhead in the electrical bandwidth. To improve the available bandwidth, PCI Express version 3.0 instead employs 128b/130b encoding, which is similar but has much lower overhead.
Many other protocols (such as SONET) use a different form of encoding known as scrambling to embed clock information into data streams. The PCIe specification also defines a scrambling algorithm, but it is used to reduce electromagnetic interference (EMI) by preventing repeating data patterns in the transmitted data stream.
The data link layer performs three vital services for the PCIe link: (1) sequencing the transaction layer packets (TLPs) that are generated by the transaction layer, (2) ensuring reliable delivery of TLPs between two endpoints via an acknowledgement protocol (ACK and NAK signaling) that explicitly requires replay of unacknowledged/bad TLPs, and (3) initializing and managing flow-control credits.
On the transmit side, the data link layer generates an incrementing sequence number for each outgoing TLP. It serves as a unique identification tag for each transmitted TLP and is inserted into the header of the outgoing TLP. A 32-bit cyclic redundancy check code (known in this context as the Link CRC or LCRC) is also appended to the end of each outgoing TLP.
On the receive side, the received TLP's LCRC and sequence number are both validated in the link layer. If either the LCRC check fails (indicating a data error), or the sequence-number is out of range (non-consecutive from the last valid received TLP), then the bad TLP, as well as any TLPs received after the bad TLP, are considered invalid and discarded. The receiver sends a negative acknowledgement message (NAK) with the sequence-number of the invalid TLP, requesting re-transmission of all TLPs forward of that sequence-number. If the received TLP passes the LCRC check and has the correct sequence number, it is treated as valid. The link receiver increments the sequence-number (which tracks the last received good TLP), and forwards the valid TLP to the receiver's transaction layer. An ACK message is sent to remote transmitter, indicating the TLP was successfully received (and by extension, all TLPs with past sequence-numbers.)
If the transmitter receives a NAK message, or no acknowledgement (NAK or ACK) is received until a timeout period expires, the transmitter must retransmit all TLPs that lack a positive acknowledgement (ACK). Barring a persistent malfunction of the device or transmission medium, the link-layer presents a reliable connection to the transaction layer, since the transmission protocol ensures delivery of TLPs over an unreliable medium.
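The sketch below condenses the receive-side rule described above: a TLP is accepted and forwarded only if its LCRC is valid and its sequence number is the next one expected; otherwise a NAK asks the transmitter to replay. The lcrc_ok() stub stands in for a real CRC-32 check, and the 12-bit counter width is an assumption about the hardware, not something stated in this article.

#include <stdbool.h>
#include <stdint.h>

typedef struct { uint16_t seq; bool lcrc_valid; } dll_tlp;

static uint16_t next_expected_seq;                  /* modular sequence counter */

static bool lcrc_ok(const dll_tlp *t) { return t->lcrc_valid; }   /* stub for CRC-32 check */

enum dll_action { SEND_ACK, SEND_NAK };

static enum dll_action receive_tlp(const dll_tlp *t)
{
    if (!lcrc_ok(t) || t->seq != next_expected_seq)
        return SEND_NAK;                            /* bad/out-of-order TLP discarded, replay requested */

    next_expected_seq = (next_expected_seq + 1) & 0x0FFF;
    return SEND_ACK;                                /* valid TLP forwarded to the transaction layer */
}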
In addition to sending and receiving TLPs generated by the transaction layer, the data link layer also generates and consumes DLLPs, data link layer packets. ACK and NAK signals are communicated via DLLPs, as are flow-control credit information and some power-management messages (on behalf of the transaction layer).
In practice, the number of in-flight, unacknowledged TLPs on the link is limited by two factors: the size of the transmitter's replay buffer (which must store a copy of every transmitted TLP until the remote receiver ACKs it), and the flow-control credits issued by the receiver to the transmitter. PCI Express requires all receivers to issue a minimum number of credits, to guarantee that a link always allows sending PCI configuration TLPs and message TLPs.
PCI Express implements split transactions (transactions with request and response separated by time), allowing the link to carry other traffic while the target device gathers data for the response.
PCI Express uses credit-based flow control. In this scheme, a device advertises an initial amount of credit for each receive buffer in its transaction layer. The device at the opposite end of the link, when sending transactions to this device, counts the number of credits each TLP consumes from its account. The sending device may only transmit a TLP when doing so does not make its consumed credit count exceed its credit limit. When the receiving device finishes processing the TLP from its buffer, it signals a return of credits to the sending device, which increases the credit limit by the restored amount. The credit counters are modular counters, so the comparison of consumed credits to the credit limit requires modular arithmetic. The advantage of this scheme (compared to other methods such as wait states or handshake-based transfer protocols) is that the latency of credit return does not affect performance, provided that the credit limit is not reached. This assumption is generally met if each device is designed with adequate buffer sizes.
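A minimal sketch of the credit check just described. The counters are free-running modular counters (8 bits here purely for illustration), so the headroom test uses modular subtraction rather than a direct comparison; the structure and function names are illustrative only.

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint8_t credits_consumed;   /* credits this transmitter has used so far */
    uint8_t credit_limit;       /* latest limit advertised by the receiver  */
} fc_state;

static bool may_transmit(const fc_state *fc, uint8_t tlp_credits)
{
    /* Modular subtraction gives the remaining headroom even across wraparound. */
    uint8_t headroom = (uint8_t)(fc->credit_limit - fc->credits_consumed);
    return tlp_credits <= headroom;
}

static void on_update_fc_dllp(fc_state *fc, uint8_t new_limit)
{
    fc->credit_limit = new_limit;   /* receiver returned credits via an UpdateFC DLLP */
}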
PCIe 1.x is often quoted as supporting a data rate of 250 MB/s in each direction, per lane. This figure is a calculation from the physical signaling rate (2.5 GT/s) divided by the encoding overhead (10 bits per byte): a sixteen-lane (×16) PCIe card would then be theoretically capable of 16 × 250 MB/s = 4 GB/s in each direction. While this is correct in terms of data bytes, more meaningful calculations are based on the usable data payload rate, which depends on the profile of the traffic, itself a function of the high-level (software) application and intermediate protocol levels.
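The short program below reproduces that calculation for the first three PCIe generations: the signaling rate divided by the symbol-encoding overhead (8b/10b for Gen1/Gen2, 128b/130b for Gen3). Protocol overhead (TLP headers, DLLPs, acknowledgements) is deliberately not included.

#include <stdio.h>

int main(void)
{
    struct { const char *gen; double gtps; double bits_per_byte; } g[] = {
        { "1.x", 2.5, 10.0 },            /* 8b/10b                              */
        { "2.0", 5.0, 10.0 },            /* 8b/10b                              */
        { "3.0", 8.0, 130.0 / 16.0 },    /* 128b/130b = 8.125 bits per byte     */
    };

    for (int i = 0; i < 3; i++) {
        double lane_mbs = g[i].gtps * 1000.0 / g[i].bits_per_byte;   /* MB/s per lane */
        printf("PCIe %s: %6.1f MB/s per lane, x16 link: %7.1f MB/s\n",
               g[i].gen, lane_mbs, 16.0 * lane_mbs);
    }
    return 0;
}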
Like other high-data-rate serial interconnects, PCIe has protocol and processing overhead due to the additional transfer robustness (CRC and acknowledgements). Long continuous unidirectional transfers (such as those typical of high-performance storage controllers) can approach 95% of PCIe's raw (lane) data rate. Such transfers also benefit the most from an increased number of lanes (×2, ×4, etc.). But in more typical applications (such as a USB or Ethernet controller), the traffic profile consists of short data packets with frequent enforced acknowledgements. This type of traffic reduces the efficiency of the link, due to the overhead of packet parsing and forced interrupts (either in the device's host interface or the PC's CPU). Being a protocol for devices connected to the same printed circuit board, PCIe does not require the same tolerance for transmission errors as protocols for communication over longer distances, so this loss of efficiency is not particular to PCIe.
Theoretically, external PCIe could give a notebook the graphics power of a desktop by pairing it with any PCIe desktop video card (enclosed in its own external housing with its own power supply and cooling); this is possible with an ExpressCard interface, which provides single-lane v1.1 performance.
IBM/Lenovo included a PCI Express slot in their Advanced Docking Station 250310U: a half-height slot with ×16 physical length but only ×1 connectivity. However, docking stations with expansion slots are becoming less common as laptops gain more capable video cards and either DisplayPort interfaces or DVI-D pass-through for port replicators and docking stations.
Additionally, external PCIe video-card products have been developed for advanced graphics applications; these require a PCI Express ×8 or ×16 slot for the interconnection cable. In 2008, AMD announced the ATI XGP technology, based on a proprietary cabling solution that is compatible with PCIe ×8 signal transmission. This connector is available on the Fujitsu Amilo and the Acer Ferrari One notebooks. Only Fujitsu has an actual external box available, which also works on the Ferrari One. Acer later launched the Dynavivid graphics dock for XGP, and Shuttle introduced its own external graphics solution, the GXT.
There are also card hubs in development that connect to a laptop through an ExpressCard slot, though they are currently rare, obscure, or unavailable on the open market. These hubs can accept full-sized cards.
Magma and ViDock also make use of ExpressCard for external graphics: ViDock units are expansion chassis tailored specifically for adapting PCI Express graphics cards for use with ExpressCard-equipped laptop PCs, letting users attach PCIe cards externally. Development in this area is ongoing; other examples include the MSI GUS and the Asus XG Station.
More recently, Intel and Apple introduced Thunderbolt, which allows for external PCI(e) devices.
Several communications standards have emerged based on high-bandwidth serial architectures. These include HyperTransport, InfiniBand, RapidIO, and StarFabric. The differences are based on trade-offs between flexibility and extensibility versus latency and overhead. An example of such a trade-off is adding complex header information to a transmitted packet to allow for complex routing (PCI Express is not capable of this); the additional overhead reduces the effective bandwidth of the interface and complicates bus discovery and initialization software. Making such a system hot-pluggable also requires that software track network-topology changes; examples of buses suited for this purpose are InfiniBand and StarFabric.
Another example is making the packets shorter to decrease latency (as is required if a bus must operate as a memory interface). Smaller packets mean packet headers consume a higher percentage of the packet, thus decreasing the effective bandwidth. Examples of bus protocols designed for this purpose are RapidIO and HyperTransport.
PCI Express falls somewhere in the middle, targeted by design as a system interconnect (local bus) rather than a device interconnect or routed network protocol. Additionally, its design goal of software transparency constrains the protocol and raises its latency somewhat.
When developing or troubleshooting the PCI Express bus, examination of hardware signals can be very important for finding problems. Logic analyzers and protocol/bus analyzers are tools that collect, analyze, decode, and store signals so people can view the high-speed waveforms at their leisure.