On 9/25/2023 12:06 PM, Scott Lurndal wrote:
> BGB <cr88192@gmail.com> writes:
>> On 9/25/2023 10:41 AM, Scott Lurndal wrote:
>>> jgd@cix.co.uk (John Dallman) writes:
>
>>>> That's an interesting new way to have your system run short of the right
>>>> kind of memory.
>>>
>>> Indeed. It's not the path from memory to the core complex that is
>>> currently most interesting (although 256-bit wide (and higher)
>>> mesh or crossbars aren't uncommon), but rather the data path widths
>>> from I/O subsystems. 512-bit wide paths from network controllers
>>> and on-board non-coherent (or coherent, see CXL) coprocessors have
>>> become common. Supporting 80gbit/sec of network traffic into memory
>>> or the networking subsystem isn't trivial.
>>>
>>> The memory bandwidth grows by adding controllers and striping across them
>>> for the most part.
>>
>> ?...
>>
>>
>> AFAIK, typical DIMMs have a 64-bit wide interface, and typical MOBOs
>> have 4 DIMM slots with DIMMs being filled in pairs.
>
> "typical" in what context? Home desktops? That's certainly not
> typical for the data center or cloud servers. One chip I'm aware of
> has 20 dual-channel DDR5 memory controllers (one per four
> cores).
>
Mostly desktop PCs.
I don't have any data center or cloud servers, and I don't personally
know anyone around here who does, ...
So it seems reasonable to assume that most people don't have them, and
thus that they are not typical.
>>
>> This would seemingly imply that RAM would be mostly limited to a 128-bit
>> datapath (or 64-bit in unganged mode).
>
> That's sufficient for the home desktop windows user, I suppose. It's
> certainly not sufficient for cloud servers, enterprise data center servers,
> high-end networking appliances, et alia.
>
It seemed like the context was "common", which would mostly imply "stuff
people actually have", not necessarily high-end data-center servers,
which pretty much no one has apart from the companies running the data
centers.
But then, even if a datacenter has, say, 1 server per 10k users (each
user with a desktop PC or similar), that would still mean only 0.01% of
the total computers are servers.
And, your typical desktop PC is only going to have 1 CPU and 2 or 4 RAM
sticks.
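As a rough sanity check of the "128-bit datapath" figure: assuming DDR4-3200 (a plausible desktop part; the exact speed grade is an assumption, not from the thread) and the standard 64-bit DIMM data bus, the theoretical peaks work out as:

```python
# Back-of-envelope peak DRAM bandwidth for a typical desktop PC.
# Assumes DDR4-3200 (3200 MT/s) on the standard 64-bit DIMM bus;
# these are theoretical peaks, not sustained throughput.

def peak_dram_bw_gbs(mt_per_s, bus_bits=64, channels=1):
    """Peak bandwidth in GB/s: transfers/s * bytes/transfer * channels."""
    return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

single = peak_dram_bw_gbs(3200, channels=1)  # "unganged" single channel
dual   = peak_dram_bw_gbs(3200, channels=2)  # usual 2- or 4-DIMM desktop

print(f"single channel: {single:.1f} GB/s")  # 25.6 GB/s
print(f"dual channel:   {dual:.1f} GB/s")    # 51.2 GB/s
```

Roughly 25-50 GB/s, versus the 20-controller server chip mentioned above, which would scale that same per-channel figure by an order of magnitude.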
>
>>
>> Similarly, PCIe slots are effectively multiple serial lanes running in
>> parallel, etc...
>
> Gen 6 x16 has a boatload of bandwidth (128 gigabytes per second).
> Note that the serial lanes are only downstream (towards the endpoint)
> from the root complex. The root complex itself uses a parallel
> interconnect to the cache/memory subsystem on the host side.
>
OK.
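For reference, the 128 GB/s figure falls out of the per-lane signaling rates; a rough sketch (ignoring encoding and protocol overhead, so these are ballpark upper bounds per direction):

```python
# Approximate PCIe peak bandwidth per direction, by generation and lanes.
# Raw per-lane signaling rates in GT/s; real throughput is somewhat lower
# once encoding (128b/130b for Gen 3-5, FLIT mode for Gen 6) and protocol
# overhead are subtracted.

GT_PER_S = {3: 8, 4: 16, 5: 32, 6: 64}

def pcie_peak_gbs(gen, lanes):
    # Each transfer carries ~1 bit per lane; divide by 8 for bytes.
    return GT_PER_S[gen] * lanes / 8

for gen in (3, 4, 5, 6):
    print(f"Gen {gen} x16: ~{pcie_peak_gbs(gen, 16):.0f} GB/s per direction")
```

So Gen 6 x16 lands at ~128 GB/s per direction, matching the figure quoted above.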
>>
>> Typical onboard peripherals being connected either with PCIe lanes or
>> via an onboard LPC bus.
>
> Nothing useful (or high performance) has been connected to the
> LPC bus in decades. Even intel is deprecating it in modern chipsets
> with a nod for backward compatibility (i.e. supporting in/out
> instructions to a subset of standard ISA peripherals like keyboard
> controllers). They're even planning on getting rid of most of it entirely
> in the future and booting the processor directly into long mode so
> all the legacy compatibility stuff like the original PIC can
> be removed.
>
There is typically still an RS232 port and PS/2 keyboard and mouse ports
and similar...
But, yeah.
>>
>> Similarly, a typical motherboard only has a single CPU socket.
>
> Typical in what context?
>
Standard home desktop PC.
That is what most people are using, at least, excluding laptops,
tablets, and cell-phones.
>>
>>
>> Outside of the CPU itself, unclear where any of these wide interconnects
>> would be, or where they would be going.
>
> Did you read the post you responded to? How do you get 40gbytes/sec
> into your memory subsystem from an onboard 400gbit nic? Or 128 gbytes/sec
> from a PCIe root complex? Or from a PCIe CXL-cache memory extender?
>
Generally you don't...
I think with a 1GbE Ethernet port one could maybe get 120 MB/s or
similar if it is going "full tilt".
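That ~120 MB/s figure can be derived from standard Ethernet framing overhead (assuming a 1500-byte MTU and TCP/IPv4; jumbo frames would push it slightly higher):

```python
# Why 1 GbE tops out near 119 MB/s of payload rather than 125 MB/s.
# Per full-size frame, the wire also carries fixed overhead:
#   preamble+SFD 8 + Ethernet header 14 + FCS 4 + inter-frame gap 12
#   = 38 bytes, plus IPv4 (20) and TCP (20) headers inside the payload.

LINE_RATE = 1e9 / 8            # 125e6 bytes/s of raw line rate
WIRE_PER_FRAME = 38 + 1500     # bytes of line time per full frame
TCP_PAYLOAD = 1500 - 20 - 20   # 1460 bytes of application data per frame

frames_per_s = LINE_RATE / WIRE_PER_FRAME
goodput = frames_per_s * TCP_PAYLOAD
print(f"~{goodput / 1e6:.1f} MB/s of TCP payload")  # ~118.7 MB/s
```

Which is a far cry from the 40+ GB/s datacenter NIC numbers being discussed above.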
--- Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.15.1
* Origin: A noiseless patient Spider (2:5075/128)