What will happen when transistor sizes reach their limit?

Discussion in 'Overclocking & Hardware' started by weak beta male, Sep 7, 2018.

  1. weak beta male

    weak beta male New Member

    Joined:
    Aug 21, 2018
    Messages:
    28
    I read on Wikipedia that transistors can't be smaller than 5nm, and that quantum computers are the same speed in games; they're only faster for factorization. What's going to happen then?
     
  2. Hater

    Hater Member

    Joined:
    Nov 19, 2012
    Messages:
    2,410
    Location:
    Canberra
    we'll get computers that are really good at factorisation and the same amount of good at games as we have now?
     
  3. ShrimpBrime

    ShrimpBrime New Member

    Joined:
    Jul 16, 2018
    Messages:
    37
    I believe that in the near future we will see 3D processors become commonplace.
     
  4. theSeekerr

    theSeekerr Member

    Joined:
    Jan 19, 2010
    Messages:
    2,731
    Location:
    Prospect SA
    Either we get much better at parallel programming than we are now (scale out with more cores), or improvements in performance will taper off (even more so than they already have), or improvements in performance will only be available in applications that can justify enormous die sizes and absurd cooling schemes (and even that has its limits), or general-purpose CPUs will gradually be supplanted by single-purpose circuitry that can be made faster.

    Or rather, some combination of all of those things.

    The reality is we're already ~7 years into this era - we haven't had good year-on-year scaling since Sandy Bridge back in 2011.
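
    To put rough numbers on how quickly "scale out with more cores" runs out of steam, here's a minimal Amdahl's-law sketch in Python (the 95% parallel fraction is purely illustrative):

        # Amdahl's law: speedup from n cores when a fraction p of the
        # work can be parallelised; the rest stays serial regardless.
        def amdahl_speedup(n_cores, parallel_fraction):
            serial = 1.0 - parallel_fraction
            return 1.0 / (serial + parallel_fraction / n_cores)

        for n in (2, 4, 8, 16, 64):
            print(f"{n:3d} cores -> {amdahl_speedup(n, 0.95):5.2f}x speedup")
        # Even at 95% parallel, 64 cores give only ~15.4x, and the
        # ceiling as n grows without bound is 1 / 0.05 = 20x.

    Even code that is 95% parallel tops out at a 20x speedup no matter how many cores you throw at it, which is why "just add cores" mainly helps workloads that are close to embarrassingly parallel.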
     
  5. Esposch

    Esposch Member

    Joined:
    Jan 14, 2010
    Messages:
    142
    Location:
    SE Melbourne (Knoxfield)
    The problem with that is cooling it. If one layer has a 95W TDP, what will 10 layers have?
    EDIT:
    Unless you layer things that are not active at the same time, such as a hardware H.264 decoder and a hardware H.265 decoder.
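
    As a back-of-envelope illustration (the 150mm² die area here is an assumed figure, not from any real part):

        # Stacking multiplies power, but the footprint a heatsink can
        # pull heat through stays the same, so power density explodes.
        TDP_PER_LAYER_W = 95.0
        DIE_AREA_CM2 = 1.5  # hypothetical 150 mm^2 footprint

        for layers in (1, 2, 10):
            total_w = TDP_PER_LAYER_W * layers
            density = total_w / DIE_AREA_CM2  # W/cm^2 through the top surface
            print(f"{layers:2d} layer(s): {total_w:5.0f} W total, {density:4.0f} W/cm^2")
        # 10 layers is 950 W through ~1.5 cm^2, far beyond what air or
        # normal water cooling can remove - hence the idea of stacking
        # units that are never active at the same time.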
     
    Last edited: Sep 7, 2018
  6. mAJORD

    mAJORD Member

    Joined:
    Jun 4, 2002
    Messages:
    9,160
    Location:
    Griffin , Brisbane
    If you're referring to the process node, it's generally accepted that '3nm' is where serious problems are currently predicted - that is, no one yet has a practical solution for achieving it, though it is considered possible, and the node is expected to eventuate. Beyond that is unknown.

    Ultimately though, this won't be a 'solid' barrier we suddenly hit. Rather, process costs and the difficulty of achieving yield will continue to spiral out of control over the next decade before we even reach 3nm. Moore's law is already no longer a reality (though progress is still quite rapid); it's the difficulty, and the resulting costs, of making these process nodes a reality that are the barrier. This has already well and truly started, with fallout already occurring such as:

    A. Intel having huge, unprecedented problems and delays with 10nm (roughly equivalent to 7nm at other foundries)
    B. The shock announcement, literally as of last week, that GlobalFoundries is flat-out abandoning all 7nm development and will no longer pursue new process nodes, period.

    This is an interesting graph that's been floating around which should put the cost part of the equation into perspective. Note this is just the cost to bring an SoC into production; it doesn't include the base R&D for the high-level microarchitecture.

    [attached: nano3.png - chart of SoC production cost by process node]

    Now let those numbers sink in, and one might understand why things are changing, even right now with 7nm on the horizon. The uptake will be slow, and end products will be expensive.

    Take Nvidia's current client GPU lineup, for example: at 16nm, they taped out four dies, GP102, GP104, GP106 and GP108. Now let's say it cost $100M to bring the largest of them into production, and progressively a bit less for the smaller dies, so a few hundred million all up.

    At 7nm, bringing just ONE die into production is going to cost nearly as much as the entire 16nm lineup did. I think we can see where this is going: you likely won't see as many dies dedicated to any manufacturer's lineup, and the bleeding-edge nodes will remain for high-margin products, probably outside the client space initially.
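
    A hedged sketch of that arithmetic, using the illustrative figures above (the $100M base cost and the scaling between dies are assumptions, not real numbers):

        # The post's own illustrative tape-out economics: four 16nm
        # dies vs a single 7nm die costing about as much as the lot.
        cost_largest_16nm = 100e6  # "say it cost $100M" for GP102
        shrink = 0.75              # assumed discount per smaller die

        dies = ["GP102", "GP104", "GP106", "GP108"]
        lineup_16nm = sum(cost_largest_16nm * shrink**i for i in range(len(dies)))
        print(f"16nm, four dies: ${lineup_16nm / 1e6:.0f}M total")  # ~$273M

        cost_one_die_7nm = 300e6   # assumed: one die ~ the whole 16nm lineup
        print(f"7nm, one die:    ${cost_one_die_7nm / 1e6:.0f}M")

    On those assumptions, replicating a four-die lineup at 7nm would cost several times the 16nm figure, which is the whole point: fewer dies per lineup, reserved for the high-margin products.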

    From a yield point of view, there'll be some relief with second-generation 7nm, when foundries introduce EUV to avoid multi-patterning, around 2020. After that, it's still anyone's guess. Moving on to 5nm, the cost nearly doubles again, and so on.


    Frankly, this is a separate issue entirely. We're talking about process technology here: up until the recent challenges there's been virtually no barrier to increasing core counts; it's purely the software lagging behind.

    As we run into the 'process wall', the issue simply becomes that we cannot increase density or transistor performance, which means you cannot increase parallelism through process alone, even if you can program for it. As such, it will undoubtedly be compensated for at the architecture level: if you have to invest half a billion dollars in taping out a die, you may be better off investing three quarters of that in architecture and pushing it onto a more affordable process.

    While on the CPU side there's very little fruit left on the tree for increasing single-threaded perf (which is what you're referring to, I think), there's quite a bit when it comes to increasing parallelism efficiently. Remember Bulldozer? Don't think some of its concepts won't see a return if we finally get to a point where single-threaded performance is of little importance. As it is, Zen has proven that architecture can compensate for a severe process handicap (GF 14nm/'12nm' vs Intel 14nm++).

    GPUs are a little harder, but we're already seeing a glimpse of the future with Turing: a return to specialised functional units in the RT and Tensor cores. Over 10 years ago we moved to a unified architecture when the benefits outweighed the increase in die size, and never looked back; but as we move into this new era we're already seeing a return to the fixed-function concept, and it will take careful programming to make use of. So I guess that comes back to your other point, the onus being on programming. The free lunch is really over this time?
     
    dirtyd likes this.
  7. theSeekerr

    theSeekerr Member

    Joined:
    Jan 19, 2010
    Messages:
    2,731
    Location:
    Prospect SA
    Ultimately we're talking about the same stuff here, you just took more time typing a longer answer ;)

    Yes, I'm talking about single-threaded performance. When transistor densities doubled regularly, it was practical to increase clock speeds inside the same power budget. That is no longer the case (look how long it's taken to bring a series of very similar architectures from 3.5GHz to 5GHz), and there's nothing on the horizon that will change that. Scaling out hasn't been a great solution in the consumer space thus far - the jump from 1 core to 2 was huge, but every subsequent jump has addressed increasingly niche requirements.
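
    The underlying reason is that dynamic power scales roughly as P ≈ C·V²·f, and higher clocks generally need higher voltage. A quick sketch (the +25% voltage for 5GHz is an illustrative guess, not a measured figure):

        # Dynamic power: P ~ capacitance * voltage^2 * frequency.
        def dynamic_power(cap, volts, freq_ghz):
            return cap * volts**2 * freq_ghz

        p_35 = dynamic_power(1.0, 1.00, 3.5)  # normalised baseline at 3.5 GHz
        p_50 = dynamic_power(1.0, 1.25, 5.0)  # ~43% more clock, assumed +25% V
        print(f"power ratio: {p_50 / p_35:.2f}x")  # ~2.23x power for 1.43x clock

    So a ~43% clock increase can cost well over double the power once the voltage needed to sustain it is factored in, which is why a fixed power budget no longer buys regular frequency gains.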
     
  8. ShrimpBrime

    ShrimpBrime New Member

    Joined:
    Jul 16, 2018
    Messages:
    37
    Heat is close to a non-issue as we speak. Four-thread processors at 3+ GHz and only 35W are commonplace already. But 3D isn't just CPU cores; it will also mean integrated memory, via die-to-die or die-to-wafer stacking for example. Another benefit is shorter interconnects.

    Another speculation, already in effect via the internet and AI, is the ability to utilize processors that aren't in your home but anywhere, as needed.

    Within the next 50 years, things are likely to be very integrated from one point to another. We will all have instantaneous access to information and to each other in virtual and/or augmented realities.

    With the above, single-core performance will be thought of as a thing of the past. It is possible that information will simply be pre-processed and readily available instantaneously. No lag, no wait time.

    We wait for software to catch up to hardware in almost all cases. I figure that the way we look at computing today will be vastly different in a few decades.
     
  9. Matthew kane

    Matthew kane Member

    Joined:
    Jan 27, 2014
    Messages:
    1,686
    Location:
    Melbourne
    Soldered-on CPUs on mobos, acting as an all-in-one, was one of the pathways suggested for going beyond silicon. They should talk to IBM.
     
    ShrimpBrime likes this.
  10. terrastrife

    terrastrife Member

    Joined:
    Jun 2, 2006
    Messages:
    18,323
    Location:
    ADL/SA The Monopoly State
    Optane runs at like 600C. There's still plenty of surface area.
     
  11. demiurge3141

    demiurge3141 Member

    Joined:
    Aug 19, 2005
    Messages:
    1,054
    Location:
    Melbourne 3073
    quantum computing
     
  12. Ratzz

    Ratzz Member

    Joined:
    Mar 13, 2013
    Messages:
    6,997
    Location:
    Cheltenham East 3192
    I'm not a believer in limits, beyond the laws of physics themselves.

    I don't believe there will necessarily be a hard limit on how small a transistor can be. I don't believe that transistor size will necessarily be an issue anyway. I don't even think transistors are part of the future of computing. I don't believe that costs will be a limiting factor in the future for consumer level stuff. I don't believe current enterprise stuff will even come close to whatever they are selling the consumer or enthusiast in the future.

    For every problem, an answer will be found. This is the way it has always been, and the way it will continue to be. The guidance computer for Apollo 11, the machine which landed us on the moon in 1969, was about 1,300 times less powerful than an iPhone 5s.

    The 8088, which formed the basis for the IBM PC, released in 1981, 12 years after Apollo 11's trip to the Moon, had eight times more memory than Apollo's Guidance Computer (16k vs the Apollo's 2k). The IBM PC XT ran at a clock speed of 4.77MHz (0.00477 GHz). The Apollo Guidance Computer ran at 1.024 MHz (0.001024 GHz).

    Never say never.
     
    Last edited: Sep 15, 2018
  13. terrastrife

    terrastrife Member

    Joined:
    Jun 2, 2006
    Messages:
    18,323
    Location:
    ADL/SA The Monopoly State
    Intel are stuck at 14nm because they're trying to advance this whole situation with a new lithography technique.
     
  14. mAJORD

    mAJORD Member

    Joined:
    Jun 4, 2002
    Messages:
    9,160
    Location:
    Griffin , Brisbane
    Which technique is that?
     
  15. Matthew kane

    Matthew kane Member

    Joined:
    Jan 27, 2014
    Messages:
    1,686
    Location:
    Melbourne
    Micro-thin silicon, a concept brought out by IBM to make silicon fabs last longer. Think of NAT when we ran out of IPv4 address space yonks ago. Last I heard, Intel was looking in this direction.
     
  16. dirtyd

    dirtyd Member

    Joined:
    Jan 4, 2006
    Messages:
    3,888
    Location:
    127.0.0.1
    Yeah, but jeez, EUV has been coming for a while now!

    https://www.extremetech.com/computing/276376-intel-reportedly-wont-deploy-euv-lithography-until-2021

    Turns out it's pretty hard to do!

    https://spectrum.ieee.org/semicondu...hography-finally-ready-for-chip-manufacturing

     
