
Thread: One clock to rule them all?

  1. #1
    Anodized. Again. Konrad's Avatar
    Join Date
    Aug 2010
    Location
    Canada
    Posts
    1,060

    Default One clock to rule them all?

    Intel's X99/Wellsburg chipset integrates the clock generator directly onto the silicon of the big fat DH82031 PCH chip. This is heralded as a Really Important New Thing. It's supposed to make clock timing/sync issues less issuesome, reduce all sorts of technobabble latencies and skews and jitters and hiccups, and consequently provide a stable uniform clock source all across the board.

    Intel believes, and rightfully so, that the clock should properly be integrated within the PCH - directly interfaced to the main processor, which itself directly controls/addresses high-bandwidth PCIe 3.0 lanes and DDR4 timings. The PCH also serves as the central bridge/controller for all the SATA and USB devices, PCIe 2.0 lanes, motherboard services, etc. Intel claims this approach offers maximum possible overall system stability, especially at the highest clock speeds.

    Asus/Gigabyte/EVGA/MSI/Asrock believe, and rightfully so, that the clock should properly be integrated into the motherboard itself - just like it's always been since the dawn of x86 machinery. Everything on the board, including the main processor and the PCH chipset and all the many things they service, is synced off the motherboard-mounted clock part. The mobo makers claim this external clock generator offers maximum possible overall system stability, especially at the highest clock speeds. (And then, of course, they go a step further by extending the maximum/overclock speeds beyond Intel's integrated part spec, at least on their high-end stuff.)

    They can't both be better, so which is it?

    My experience with hobby-powered microcontrollers is that they typically include an integrated clock source (to offer minimum/basic functionality at lowest cost), but they can also interface with an external clock source (and gain higher performance through a little added part cost/complexity). I doubt this example has much relevance when compared to vastly faster, more powerful, and more complex X99 chipset/mobo componentry.
    My mind says Technic, but my body says Duplo.

  2. #2
    Will YOU be ready when the zombies rise? x88x's Avatar
    Join Date
    Oct 2008
    Location
    MD, USA
    Posts
    6,334

    Default Re: One clock to rule them all?

    I cannot speak to the low-level effects of the clock source being integrated vs not, but from a higher level, this looks like yet another move to integrate as much as possible into as few parts as possible; a trend that, imo, is a very good thing. I imagine the mobo manufacturers complained similarly when AMD, and later Intel, moved the RAM controller onto the CPU. From the technology consumer point of view (in this case, 'consumer' == everyone who uses the chips, regardless of role), for the vast majority of use cases, I see this as a very good thing. Reducing the number of parts in a system will always increase the system-level stability and reliability, and historically when Intel has been able to integrate more parts into the CPU, they have also driven the price down. Of course, this means that the only clock available will be the one that Intel puts on the CPU, which will likely disappoint some niche markets (at least in the short term).

    Long story short, this sounds like it will get us cheaper, more reliable systems, at the cost of less flexibility.
    That we enjoy great advantages from the inventions of others, we should be glad of an opportunity to serve others by any invention of ours, and this we should do freely and generously.
    --Benjamin Franklin
    TBCS 5TB Club :: coilgun :: bench PSU :: mightyMite :: Zeus :: E15 Magna EV

  3. #3
    Anodized. Again. Konrad's Avatar
    Join Date
    Aug 2010
    Location
    Canada
    Posts
    1,060

    Default Re: One clock to rule them all?

    Hmm. I don't understand motherboard timings anyhow.

    I mean ... electrical charge propagates through an ideal copper conductor at roughly 2/3 the speed of light in vacuum. So, if my math is right, basically about 20cm per nanosecond. So, for example, an electronic signal leaves the processor chip, travels through about 10-15cm of mobo traces, travels through another 5-15cm of GPU card traces, and finally reaches the GPU chip about 1-2 nanoseconds later. That's assuming a reasonably direct path that doesn't zigzag too far off course across the circuit boards, one that doesn't pass through any additional glue-logic buffers or processing/addressing parts. One or two nanoseconds vs 4GHz processing translates into roughly half a dozen clocks' worth of latency, just for the signal to be physically transferred across the bus, before it hits any processing latencies within the chips themselves. Double that for a two-way communication. Double again if the bits need to travel back across the mobo to reach the DIMMs, or make a longer journey down to a faraway 3rd PCIe slot or whatever.
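    The back-of-envelope numbers above can be sanity-checked in a few lines. This is just a rough sketch; the 25cm total trace length and 4GHz clock are example figures taken from the paragraph, not measured values:

    ```python
    # Sanity check of the propagation-delay estimate above.
    # Assumptions: signal speed ~2/3 c in a copper trace, 4 GHz clock,
    # ~15 cm of mobo traces + ~10 cm of GPU card traces (example figures).

    C = 299_792_458                   # speed of light in vacuum, m/s
    speed = (2 / 3) * C               # ~2e8 m/s through the trace
    cm_per_ns = speed * 1e-9 * 100    # distance covered per nanosecond, in cm

    trace_cm = 15 + 10                # one-way trip: mobo traces + card traces
    delay_ns = trace_cm / cm_per_ns   # one-way propagation delay

    clock_ghz = 4.0
    cycles = delay_ns * clock_ghz     # clock periods elapsed while in transit

    print(f"{cm_per_ns:.1f} cm per ns")                   # ~20 cm/ns
    print(f"{delay_ns:.2f} ns one-way")                   # ~1.25 ns
    print(f"{cycles:.1f} cycles at {clock_ghz:.0f} GHz")  # ~5 clocks, one-way
    ```

    Doubling for the round trip, or for a longer path out to the DIMMs or a far PCIe slot, scales these figures linearly.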

    So I can't see how everything can realistically stay in sync off a single clock generator part (wherever it is located) on such multi-GHz scales. Not without introducing all sorts of complex signal latencies. Ha, seems almost like the same problems encountered in radio signals - where partial signals reflect and echo like crazy and arrive all out of order - yet clever logic can efficiently reconstruct the whole mess into a complete and coherent data packet in real time.
    My mind says Technic, but my body says Duplo.
