tor speeds,” Conte explains. This created a “wire-delay wall” that engineers circumvented by using parallelism behind the scenes. Simply put: the hardware extracted instructions and executed them in parallel but independent groups. This was known as the “superscalar era,” and the Intel Pentium Pro microprocessor, while not the first system to use this method, demonstrated the success of the approach.
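The grouping idea can be sketched in miniature. The toy scheduler below is my own illustration of dependence-based grouping, not the Pentium Pro’s actual issue logic, and the instruction encoding (destination register plus source registers) is invented for the example:

```python
# Toy sketch of superscalar grouping: walk an in-order instruction
# stream and split it into consecutive groups of mutually independent
# instructions, which could then execute in parallel.
# Each instruction is modeled as (dest_register, [source_registers]).

def issue_groups(instrs):
    """Split an instruction stream into groups with no register conflicts."""
    groups, current, written, read = [], [], set(), set()
    for dest, srcs in instrs:
        # Start a new group on a read-after-write, write-after-read,
        # or write-after-write conflict with the current group.
        if (set(srcs) & written) or (dest in written) or (dest in read):
            groups.append(current)
            current, written, read = [], set(), set()
        current.append((dest, srcs))
        written.add(dest)
        read.update(srcs)
    if current:
        groups.append(current)
    return groups

prog = [("r1", ["r2", "r3"]),   # r1 = r2 + r3
        ("r4", ["r5", "r6"]),   # independent -> same group
        ("r7", ["r1", "r4"])]   # reads r1 and r4 -> new group
print([len(g) for g in issue_groups(prog)])  # -> [2, 1]
```

The first two instructions share no registers, so they can issue together; the third depends on both results and must wait, exactly the kind of scheduling real superscalar hardware performs invisibly to the programmer.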
Around the mid-2000s, engineers
hit a power wall. Because the power
in CMOS transistors is proportional
to the operating frequency, when the
power density reached 200 W/cm²,
cooling became imperative. “You can
cool the system, but the cost of cooling something hotter than 150 watts
resembles a step function, because 150
watts is about the limit for relatively
inexpensive forced-air cooling technology,” Conte explains. The bottom line?
Energy consumption and performance
would not scale in the same way. “We
had been hiding the problem from programmers. But now we couldn’t do that
with CMOS,” he adds.
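The power wall follows from the standard first-order model of dynamic CMOS power, P ≈ αCV²f: at a fixed supply voltage, power density grows linearly with clock frequency. A minimal sketch, with round parameter values that are illustrative assumptions, not figures from the article:

```python
# First-order dynamic CMOS power model: P = a * C * V^2 * f.
# All parameter values below are assumed, for illustration only.

def dynamic_power_density(freq_hz, activity=0.1, cap_per_cm2=2e-7, vdd=1.0):
    """Approximate switching power per cm^2, in watts.

    activity     -- fraction of capacitance switched per cycle (assumed)
    cap_per_cm2  -- switched capacitance per cm^2, in farads (assumed)
    vdd          -- supply voltage, in volts (assumed)
    """
    return activity * cap_per_cm2 * vdd**2 * freq_hz

for f_ghz in (1, 2, 4, 8):
    watts = dynamic_power_density(f_ghz * 1e9)
    print(f"{f_ghz} GHz -> {watts:.0f} W/cm^2")  # 20, 40, 80, 160 W/cm^2
```

Each doubling of the clock doubles the power density, which is why frequencies plateaued once densities approached the 200 W/cm² regime and the cooling cost “step function” the article describes.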
No longer could engineers pack
more transistors onto a wafer with the
same gains. This eventually led to reducing the frequency of the processor
core and introducing multicore processors. Still, the problem didn’t go
away. As transistors became smaller—hitting approximately 65 nm in 2006—performance and economic gains continued to subside, and as nodes dropped to 22 nm and 14 nm, the problem grew worse.
What is more, all of this has contributed to fabrication facilities becoming incredibly expensive to build, and
semiconductors becoming far more expensive to manufacture. Today, there
are only four major semiconductor
manufacturers globally: Intel, TSMC,
GlobalFoundries, and Samsung. That
is down from nearly two dozen two decades ago.
To be sure, the semiconductor industry is approaching the physical limitations of CMOS transistors. Although alternative technologies are now in the research and development stage—including carbon nanotubes and tunneling field-effect transistors (TFETs)—there is no evidence these next-gen technologies will actually pay off in a major way. Even if they do usher in further performance gains, they can at best stretch Moore’s Law by a generation or two.
In fact, industry groups such as the IEEE International Roadmap for Devices and Systems (IRDS) initiative have projected that it will be nearly impossible to shrink transistors further by 2023.
Observes Michael Chudzik, a senior
director at Applied Materials: “
Semiconductor technology is challenged
on many fronts. There are technical
and engineering challenges, economic
challenges because we’re seeing fewer
industry players, and fundamental
changes in the way people use computing devices” such as smartphones, as
well as cloud computing and the Internet of Things (IoT), which place entirely
different demands on ICs. This makes
the methods of the past less desirable
in the future. “We are entering a different era,” Rabaey observes.
Designs on the Future
Mapping out a future for integrated
circuits and computing is paramount.
One option for advancing chip performance is the use of different materials,
Chudzik says. For instance, researchers
are experimenting with cobalt to replace
tungsten and copper in order to increase
the volume of the wires, and studying alternative materials for silicon. These include Ge, SiGe, and III-V materials such as gallium arsenide and gallium indium arsenide. However, these materials present performance and scaling challenges and, even if those problems can be addressed, they would produce only incremental gains that would eventually tap out.
Faced with the end of Moore’s Law,
researchers are also focusing attention
on new and sometimes entirely different approaches. One of the most promising options is stacking components
and scaling from today’s 2D ICs to 3D
designs, possibly by using nanowires.
“By moving into the third dimension
and stacking memory and logic, we
can create far more function per unit
volume,” Rabaey explains. Yet, for now,
3D chip designs also run into challenges, particularly in terms of cooling. The
devices have less surface area per unit volume as engineers stack components. As a result,
“You suddenly have to do processing at
a lower temperature or you damage the
lower layers,” he notes.
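Rabaey’s cooling point is geometric: stacking multiplies the heat-generating volume, while the exposed surface available for heat removal grows far more slowly. A minimal sketch using an idealized rectangular stack of identical dies (the dimensions are assumed, purely for illustration):

```python
# Surface-area-to-volume ratio of an idealized stack of identical dies.
# side and thick are assumed dimensions in arbitrary units.

def surface_to_volume(n_layers, side=1.0, thick=0.1):
    """Exposed surface area divided by active volume for a stacked block."""
    volume = side * side * thick * n_layers
    # exposed surface of the block: top + bottom + four sides
    surface = 2 * side * side + 4 * side * (thick * n_layers)
    return surface / volume

for n in (1, 2, 4, 8):
    print(n, "layers:", round(surface_to_volume(n), 1))  # 24.0, 14.0, 9.0, 6.5
```

The ratio of cooling surface to heat-generating volume drops quickly as layers are added, which is the core of the 3D cooling challenge the article describes.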
Consequently, a layered 3D design, at
least for now, requires a fundamentally
different architecture. “Suddenly, in order to gain denser connectivity, the traditional approach of having the memory and processor separated doesn’t
make sense. You have to rethink the way
you do computation,” Rabaey explains.
It’s not an entirely abstract proposition.
“The advantages that some applications
tap into—particularly machine learning
and deep learning, which require dense
integration of memory and logic—go
away.” Adding to the challenge: a 3D design increases the risk of failures within
the chip. “Producing a chip that functions with 100% integrity is impossible.
The system must be fail-tolerant and
deal with errors,” he adds.
Regardless of the approach and
the combination of technologies, researchers are ultimately left with no
perfect option. Barring a radical breakthrough, they must rethink the fundamental way in which computing and
processing take place.
Conte says two possibilities exist
beyond pursuing the current technology direction.
One is to make radical changes, but
limit these changes to those that happen “under the covers” in the microarchitecture. In a sense, this is what took
place in 1995, except “today we need
to use more radical approaches,” he
says. For servers and high-performance
computing, for example, ultra-low-temperature superconducting is being
advanced as one possible solution. At
present, the U.S. Intelligence Advanced