What I learned from Kastner's "Artificial Atoms" paper from 1993

These are my notes from reading a paper that was published in Physics Today in 1993 entitled "Artificial Atoms" by Marc Kastner of MIT.

Quantum dots in general

Quantum dots can constrain the motion of electrons to a space that is on the order of 100 nm. Within this space the energy levels of the electrons become quantized similarly to an atom. This is partly why Kastner regards quantum dots as 'artificial atoms'.

The basic concept of a quantum dot is essentially a quantum well that is localized in all three dimensions. A bit of semiconductor is surrounded by some geometry of insulator.

Coulomb blockade

He presents a different perspective on Coulomb blockade than what I had learned before. In retrospect, I had focused on different aspects in my previous learnings about the Coulomb blockade effect. This analysis focuses on how the electron experiences capacitance with the entire geometry of the setup. That capacitance sets a charging energy that needs to be overcome when adding an electron to a quantum dot. This energy is e²/2C, so if the energy difference between the Fermi level of the source and the Fermi level of the dot is smaller than this minimum, an electron cannot tunnel.

This is of course assuming that the thermal energy kT is smaller than e²/2C.
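To get a feel for the scale (the capacitance and temperature below are my own illustrative choices, not values from the paper), a total dot capacitance of around 10⁻¹⁶ F gives a charging energy of roughly a milli-electronvolt, which is why such experiments need very low temperatures:

    # Rough estimate of the Coulomb charging energy e^2/2C versus kT.
    # The capacitance and temperature are assumed, illustrative numbers,
    # not values taken from Kastner's paper.
    e = 1.602e-19        # electron charge (C)
    kB = 1.381e-23       # Boltzmann constant (J/K)
    C = 1e-16            # assumed total dot capacitance (F)
    T = 0.1              # assumed experiment temperature (K)

    charging_energy = e**2 / (2 * C)    # J
    thermal_energy = kB * T             # J

    print(f"e^2/2C = {charging_energy / e * 1000:.2f} meV")
    print(f"kT     = {thermal_energy / e * 1000:.4f} meV")
    print("Blockade observable:", charging_energy > thermal_energy)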

Low temperature current flow

Fairly interesting discussion of the reasons why only specific conditions allow for current flow through a quantum dot near zero temperature. He shows that the electrostatic energy of the dot as a function of its charge is given by:

E = QVg + Q²/2C

where E is the energy, Q is the charge on the dot, Vg is the gate voltage, and C is the capacitance between the dot and the rest of the system. For this analysis he considers only the capacitance of the quantum dot itself. In a real-life situation there would likely be notable contributions from the gate and contacts as well.

Q0 = -CVg is defined as the charge at which this energy is minimized. Since the above equation is parabolic, you can picture Q0 as the charge at which the minimum of the parabola occurs. The actual charge on the dot, Q, just like most charge quantities we talk about, is quantized into units of the fundamental charge, whereas Q0 can be tuned continuously with the gate voltage.

Imagine quantized spots on the energy parabola, each separated from the next by one fundamental charge. When two of these spots are degenerate, sitting at the same height on opposite sides of the minimum – which might be Q = -Ne and Q = -(N+1)e, for example – then current can flow at zero temperature. This is because no energy is needed to switch between the states with different numbers of electrons.
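Here is a small numerical sketch of that picture. The degeneracy condition Vg = (N + 1/2)e/C follows directly from the parabola; the capacitance is my own illustrative choice:

    # Energy of the dot as a function of its (quantized) charge,
    # E(Q) = Q*Vg + Q^2/(2C), following the form used above.
    # The capacitance is an assumed, illustrative value.
    e = 1.602e-19   # electron charge (C)
    C = 1e-16       # assumed dot capacitance (F)

    def energy(n_electrons, Vg):
        Q = -n_electrons * e
        return Q * Vg + Q**2 / (2 * C)

    # At Vg = (N + 1/2) * e / C the states with N and N+1 electrons
    # are degenerate, so an electron can hop on and off at no cost.
    N = 3
    Vg_degenerate = (N + 0.5) * e / C
    print(energy(N, Vg_degenerate), energy(N + 1, Vg_degenerate))  # equal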

Analogy to chemistry

Increasing gate voltage in his example leads to large numbers of electrons being confined in the quantum dot. As gate voltage increases we also observe changes in the behaviour of this electron system. A direct analogy can be drawn to the chemistry of the periodic table. Using gate voltage, we can transform our quantum dot from one element to another. Just as in chemistry, the electronic behaviour can vary substantially depending on the number of electrons present.

Energy quantization

Here Kastner turns to the energy quantization of the electrons in our artificial atoms. He briefly discusses the fact that only a small fraction of the electrons in the quantum dot are free; the rest are bound tightly to atoms in the lattice. These free electrons are the ones we are generally talking about when we discuss quantum dots. He briefly describes how different construction techniques tend to allow different numbers of free electrons to be confined on the quantum dot. For the purposes of my research, I am already aware that we have a system in which we can easily choose conditions under which the quantum dot(s) will contain zero, one, two, etc. free electrons.

It is possible to map out the energy spectrum of a quantum dot by keeping the gate voltage steady and conducting a source-drain bias sweep. If an energy level falls between the Fermi levels of the source and drain, current will flow. If two energy levels fall within that window, more current will flow. Some corrections need to be made for the changes in the Fermi energy of the device itself (since it will be somewhere between the source and drain levels), but this is rather straightforward. The energy spectrum can thus be mapped out. Note that this energy spectrum includes multiple electron states as well as excited states for each number of electrons.
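A toy version of this idea, where I just count how many made-up levels sit in the bias window; the level positions, and the assumption that current simply grows with the number of conducting levels, are simplifications of my own for illustration:

    # Toy picture of a source-drain bias sweep at fixed gate voltage:
    # a level contributes to transport when it lies between the source
    # and drain Fermi levels. Level energies are invented numbers (meV).
    levels_meV = [0.5, 1.3, 2.1, 3.4]

    def levels_in_window(bias_meV, mu_drain_meV=0.0):
        mu_source = mu_drain_meV + bias_meV
        lo, hi = sorted((mu_drain_meV, mu_source))
        return [E for E in levels_meV if lo <= E <= hi]

    for bias in [0.4, 1.0, 1.5, 2.5, 4.0]:
        print(bias, "meV bias ->", len(levels_in_window(bias)), "levels conducting")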

Increasing the gate voltage a lot leads to more electrons being present on the quantum dot, which means more states sitting at or below the Fermi levels of the leads are available to be filled. Seen this way, it makes sense that Kastner says that increasing the gate voltage leads to a decrease in the energy of the confined states.

Screening length and surface charge

It was here that I ran across the term 'screening length'. Since I wasn't 100% sure what it was, I started searching. I quickly found the Wikipedia articles on Debye length and electric field screening. It seems that screening length refers to the concept also known as the Debye length. Over distances longer than this length, plasmas screen out electric fields. That is, at distances beyond the Debye length, the effect of an electric field is substantially cancelled by the rearrangement of the mobile charges.
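For my own reference, the non-degenerate form of the screening length is the Debye length, lambda_D = sqrt(eps * kB * T / (n * e²)); in a metal the appropriate (Thomas-Fermi) length is even shorter. The permittivity and carrier density below are assumed values, just to get a sense of scale:

    # Debye screening length, lambda_D = sqrt(eps * kB * T / (n * e^2)),
    # for a non-degenerate carrier gas. All input values are assumed,
    # illustrative numbers, not taken from the paper.
    import math

    e = 1.602e-19        # electron charge (C)
    kB = 1.381e-23       # Boltzmann constant (J/K)
    eps0 = 8.854e-12     # vacuum permittivity (F/m)

    eps_r = 12.9         # relative permittivity (roughly GaAs)
    n = 1e24             # assumed carrier density (m^-3)
    T = 300              # temperature (K)

    lambda_D = math.sqrt(eps_r * eps0 * kB * T / (n * e**2))
    print(f"Debye length ~ {lambda_D * 1e9:.2f} nm")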

In the article, Kastner uses the concept of screening length when discussing the all-metal artificial atom. In this case, the metal has a short screening length, so charge added to the quantum dot will reside very close to the surface. This in turn means that the electron-electron interaction is always e²/C, regardless of the number of electrons that have already been added to the quantum dot. This does not apply to all types of quantum dots. The discussion seems to be limited in this case to the all-metal quantum dots.

Experiments vs predictions

The energy levels of a two-probe quantum dot depend strongly on the applied magnetic field. This is not the case for all types of quantum dots. Level spacings in a two-probe quantum dot are irregular due to the effect of charged impurities in the materials used.

In 1993 it seems that the calculation of a full spectrum was not yet possible. I imagine that soon I will be looking at more recent literature in which this is accomplished. The simplest calculation method uses a simple harmonic oscillator potential and assumes a non-interacting system in which the added electrons don't change the shape or strength of the potential.
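As a sketch of what that simplest calculation looks like: in two dimensions a harmonic oscillator has levels (n+1)ħω, each holding 2(n+1) electrons including spin, and in the constant-interaction picture the cost of adding the Nth electron is roughly e²/C plus the single-particle level it enters. The confinement energy and capacitance below are my own placeholder values:

    # Constant-interaction model with non-interacting 2D harmonic-
    # oscillator levels: addition energy for the N-th electron is
    # roughly e^2/C plus the single-particle level it occupies.
    # hbar_omega and C are assumed, illustrative values.
    e = 1.602e-19
    C = 1e-16                             # assumed capacitance (F)
    hbar_omega_meV = 3.0                  # assumed confinement energy (meV)
    charging_meV = e**2 / C / e * 1000    # e^2/C in meV

    # Shell n has energy (n+1)*hbar_omega and holds 2*(n+1) electrons
    # (orbital degeneracy times spin).
    levels = []
    for n in range(4):
        levels += [(n + 1) * hbar_omega_meV] * (2 * (n + 1))

    for N in range(1, 9):
        addition_energy = charging_meV + levels[N - 1]
        print(f"N = {N}: addition energy ~ {addition_energy:.2f} meV")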

At the end of the paper, experimental results are compared with the theoretical expectations. Due to some notable discrepancies, Kastner concludes that the constant-interaction model is not quantitatively correct. He claims that this is because it is not self-consistent. I am not totally sure why; perhaps it will become clear to me in time.

The line shape for electrons on quantum dots is Lorentzian. The following analysis places some constraints on the physical design of the quantum dot such as a minimum width criterion for the barriers.
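For reference, a Lorentzian centred at E0 with full width at half maximum Gamma has the following form; E0 and Gamma here are placeholder values of my own, not numbers from the paper:

    # Lorentzian line shape, peak at E0 with full width at half maximum Gamma.
    # E0 and Gamma are placeholder values for illustration only.
    def lorentzian(E, E0=0.0, gamma=0.1):
        return (gamma / 2) ** 2 / ((E - E0) ** 2 + (gamma / 2) ** 2)

    for E in [-0.3, -0.1, -0.05, 0.0, 0.05, 0.1, 0.3]:
        print(f"E = {E:+.2f}: {lorentzian(E):.3f}")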

The last section includes a few of the basic applications that were foreseeable at that point in history. It is interesting to me that this article predates the quantum computation fad that has swept much of condensed matter physics and certainly the sub-field of quantum dot physics.

Gordon E. Moore: The Semiconductor Prophet

The insights in Gordon Moore’s world-famous paper, “Cramming More Components onto Integrated Circuits”, have been validated again and again in the decades since its publication.

Upon reading the paper, you will likely find startlingly accurate statements jumping out at you – startlingly accurate, that is, because the original publication date was April 1965.

Here are some of the prophetic insights that leaped out at me:

He says that memory may be distributed throughout the machine rather than concentrated in a single unit. My primary experience with this phenomenon is in the construction of personal computers. Today’s PCs have hard disks, RAM, and CPU cache in order of increasing speed and decreasing size. Additionally, specialized devices such as video cards are increasingly being fitted with their own RAM and even sometimes flash memory. Memory accessibility has proven to be one of the salient difficulties of computer design. Spreading the memory around has made even faster operations possible.

He accurately predicted that semiconductor integrated circuits would come to dominate electronics. The rise of the PC age is a good indication of this domination. Today we are beginning to see semiconductor integrated circuits in pretty much anything that has electric power flowing through it.

His ‘day of reckoning’ sounds a lot like the frequency wall that we hit in the early 2000s. Since then, clock frequencies in mainstream computers have not increased. Today, our top CPU manufacturers focus on improving performance per clock cycle and per watt of power.

He says that we may find it more economical to build larger systems out of smaller functions. Look at our multi-core personal computers, computer clusters, cell computers, and cloud computing. As a consequence of the frequency wall and economics, today’s supercomputers are dominated by multicore and multiprocessor systems. In the last few years we have also been watching the rise of cloud computing. Using the power of the Internet, staggeringly huge supercomputers are created out of smaller cells linked to each other through the network. We have only just scratched the surface of how cloud computing is going to change the face of our computing world.

Lastly, this is the piece in which Moore first described the economic relationship that would come to be known as Moore’s Law. His observations are often misquoted and misinterpreted in popular media. He identified a definite trend in the cost of production of integrated circuit components and the number of components per integrated circuit. This has been extrapolated by later thinkers into a plethora of versions of “Moore’s Law” that are claimed to be representative. The accuracy of the later versions is highly questionable. However, Moore’s actual prediction has been remarkably accurate for over four decades.
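To see how strong a compounding trend that is, here is a quick extrapolation; the two-year doubling period and the starting component count are my own assumptions rather than figures from the 1965 paper:

    # Compounding growth in components per chip, assuming a doubling
    # period of two years. The starting count and period are assumed,
    # illustrative values, not figures from Moore's paper.
    start_year, start_components = 1965, 64
    doubling_period_years = 2

    for year in range(1965, 2011, 5):
        doublings = (year - start_year) / doubling_period_years
        components = start_components * 2 ** doublings
        print(year, f"{components:,.0f}")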

General Trends in the History of Circuits

The material for this post is based primarily on a lecture by Thomas Szkopek in the class Nanoelectronic Devices that I am taking at McGill.

Electrons are very light and have a definite (constant) charge. Their high charge-to-mass ratio is a primary reason why electrons are better suited than nucleons or mechanical devices for building semiconductor electronics.
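A quick comparison of charge-to-mass ratios makes the point (standard textbook constants; I use the proton to stand in for ‘nucleons’ here):

    # Charge-to-mass ratio of the electron versus the proton.
    e = 1.602e-19      # elementary charge (C)
    m_e = 9.109e-31    # electron mass (kg)
    m_p = 1.673e-27    # proton mass (kg)

    print(f"electron e/m = {e / m_e:.3e} C/kg")
    print(f"proton   e/m = {e / m_p:.3e} C/kg")
    print(f"ratio        = {(e / m_e) / (e / m_p):.0f}")   # ~1836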

What is a transistor? It is ‘transferred resistance’. We can control the resistance of a lump of material, and by controlling the resistance we can control the flow of current through a semiconductor. This is the primary basis for decision making at the circuit level.

We have built more transistors than anything else? (Is this true, or is it computer bits, like those in a hard drive? I'm not sure what he said.)

In semiconductors, germanium was eventually replaced by silicon. Why?

Not because of cost or availability (initially). It was primarily a question of easier fabrication. The key factor was the quality of the oxide you can grow on silicon compared with germanium.

There has been a lot of talk for years about how this or that material was going to replace silicon. None of them have yet done so because silicon is really well established and quite good at what it does. “Gallium Arsenide is the material of the future and it always will be.” - Szkopek.

Why smaller and smaller integrated circuits? By making the parts smaller and closer together, we can eliminate a lot of the resistances, capacitances and inductances, as well as reduce our overall material usage. This should mean cheaper integrated circuits that require less material to create and less power to run.

Gordon Moore

Gordon Moore was a chemist by training but was also one of the most successful electronic engineers of all time. What was Moore’s major contribution? He figured out how to grow high quality oxide on silicon.

As a computer scientist, I am well aware of some of the many different ways Moore’s Law has been misrepresented. So what is it actually? We read “Cramming More Components onto Integrated Circuits”, the famous paper that Moore wrote in 1965, from which ‘Moore’s Law’ was extrapolated.

Moore's Law: Relative manufacturing cost per component and number of components per integrated circuit (diagram in paper, or on the Wikipedia article).

The cost increase as we move to the right is primarily because fewer chips are successfully made when we try to jam more components onto the wafer. There is a minimum cost per component for each technology level.
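Here is a toy model of why that minimum appears: a fixed cost per chip gets divided among the components, but yield drops as the component count grows. The yield model and all the numbers are invented purely for illustration:

    # Toy model of cost per component versus components per chip:
    # a fixed wafer/processing cost divided among the working components,
    # with yield falling as component count rises, so a minimum appears.
    # All numbers are invented for illustration.
    import math

    chip_cost = 100.0           # assumed fixed cost per chip attempt
    defect_rate = 1.0 / 50000   # assumed chance a given component kills the chip

    def cost_per_component(n_components):
        yield_fraction = math.exp(-defect_rate * n_components)
        return chip_cost / (yield_fraction * n_components)

    for n in [1000, 10000, 50000, 100000, 500000]:
        print(f"{n:>7} components -> {cost_per_component(n):.4f} per component")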

Atomic Scale

What happens when transistor dimensions approach 10 nm? We are looking at the atomic scale.

He then showed us some pictures, taken with scanning-tunneling microscopy, of a tiny surface with iron atoms on it. The atoms were physically arranged into a circle. As the circle is assembled, you can see a symmetric pattern of standing waves in the electron density form at its center. This is an incredible (and graphic) demonstration of quantum mechanics in action. The pictures are from this paper on “Confinement of electrons to quantum corrals on a metal surface.”

One of Szkopek’s main points with regards to these photos is that atomic scale disorder is going to be present when we are working at such ridiculously small scales. Some of this disorder can be dealt with through more careful manufacturing and usage techniques, but we are definitely getting into the realm where we are starting to touch upon the omnipresent low-level disorder of the universe.

Szkopek says that Intel is currently using a 1.2 nm oxide layer. That is about four atomic layers of oxide. We are at the atomic scale, and will have to start considering the physics that governs it.

In closing, Szkopek talked a bit about how the nice formulas we tend to see in the theoretical sections of courses devolve into complicated, ugly-looking things when we try to do real problems. There is a tendency to term this “things getting crazy”. Szkopek made his point clear when he closed the class with: “Things don't get crazy, they get physical!”

First semester of my Physics Masters

Today was the first day of my Physics Masters at McGill. Yoga in the morning was followed by breakfast and then the 20-minute walk to campus.

I jumped through some bureaucratic hoops to get set up as a grad student. I now have a desk in an office on the fourth floor! The room even has a window! I feel like I have moved up a lot in the world since my days of working in the cavernous underbelly of the Lab Building at the University of Regina.

My supervisors are Michael Hilke of McGill and Guy Austing of the National Research Council.

My course list for this semester is:

  • Physics 659 - Experimental Condensed Matter - Taught cooperatively, but overseen by Dominic Ryan.
  • Physics 634 - Seminar in Advanced Materials - Taught by my supervisor Michael Hilke.
  • Electrical Engineering 535 - Nanoelectronic Devices - Taught by Thomas Szkopek.

Today I had a session of all three of these classes! Luckily since it is the first day the lectures were not overly intense. Even so, I am feeling a bit shell-shocked. In the next several hours, and perhaps stretching into tomorrow, I will be attempting to compile my learnings from today into a comprehensible format. Stay tuned.