## Musings inspired by Ashoori’s article “Electrons in Artificial Atoms”

These notes come primarily from reading an article by R. C. Ashoori of MIT, published in the journal Nature, Volume 379, February 1996, entitled “Electrons in Artificial Atoms”.

In an artificial atom, the effects of electron-electron interaction are more important than in a normal atom! Orbital energies are far lower in artificial atoms than in real ones, while the opposing effect of the electrons spreading out in space does not reduce the interaction energy by as much. The relative importance of electron-electron interactions therefore increases.

The energy resolution of the new spectroscopic techniques is limited only by the sample’s temperature.

Ashoori describes a setup where a QD is close enough to one contact (capacitor plate) that an electron would be able to quantum tunnel between them. The other capacitor plate is too far away to tunnel. When an electron is successfully added to the QD, you can detect a tiny change in charge (typically about half an electron charge) on the surface of the farther capacitor plate.

A neat extension of this idea is to add a small AC component on top of the DC gate voltage that drives electrons onto the QD. At specific DC gate voltages, an electron then tunnels onto and off the QD with each cycle of the AC part of the gate voltage. This allows synchronous detection at the farther capacitor plate, whose charge changes slightly in time with these oscillations. This technique is known as single-electron capacitance spectroscopy (SECS).

Gated transport spectroscopy (GTS) seems to be what we do with our double quantum dots. We maintain a voltage difference between the source and drain contacts, and thus we can measure the current flow changing with changing conditions such as a changing gate voltage. When this article was written (1996), no one had yet successfully conducted GTS with fewer than about 10 electrons on the dot.

There are two main effects that make it more difficult to add extra electrons to a QD. The first is electron-electron interaction: the electrons push each other away, and the associated energy cost is called the charging energy. The second is the quantum energy levels: to be present in the dot, an electron must occupy a quantum level, and by the Pauli exclusion principle no more than two electrons (one per spin orientation) can occupy the exact same level. The review states that the charging energy is about five times the quantum level spacing for the samples described in the paper.
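To make these two energy costs concrete, here is a toy Python sketch of the constant-interaction picture. The numbers are invented, chosen only to respect the roughly 5:1 ratio quoted above; this is my own illustration, not a calculation from the paper.

```python
# Toy constant-interaction model for the energy cost of adding the N-th
# electron to a quantum dot. All numbers are illustrative, not from Ashoori.
E_CHARGING = 5.0     # charging energy e^2/C, in meV (dominant term)
LEVEL_SPACING = 1.0  # quantum level spacing, in meV (~1/5 of charging energy)

def addition_energy(n):
    """Energy to add electron number n (1-indexed).

    Every electron pays the charging energy; a new quantum level must be
    opened only for every second electron, since Pauli exclusion allows
    two electrons (opposite spins) per level.
    """
    new_level_needed = (n % 2 == 1)  # electrons 1, 3, 5, ... open a new level
    return E_CHARGING + (LEVEL_SPACING if new_level_needed else 0.0)

print([addition_energy(n) for n in range(1, 6)])  # → [6.0, 5.0, 6.0, 5.0, 6.0]
```

Every second electron is slightly cheaper because it can pair into an already-open level; in real data this even/odd alternation shows up in the spacings between charging peaks.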

There is a geometric factor that connects the value of the gate voltage with the actual amount of energy needed to add an electron to the dot. This paper seems to be claiming that they simply use the geometry of the sample to estimate this. In the case of their perfectly symmetric doubly-contacted QD, they claim that this geometric factor is 0.5.
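One way to think about such a geometric factor (my own toy illustration, not the paper’s method) is as a capacitance divider: only the fraction C_gate/C_total of a gate-voltage step actually shifts the dot’s energy. The capacitance values below are invented.

```python
# Toy capacitance-divider ("lever arm") estimate: the fraction of a gate
# voltage change that actually shifts the dot's energy. Values are invented.
E_CHARGE = 1.602e-19  # electron charge, in coulombs

def lever_arm(c_gate, c_total):
    """Geometric factor alpha = C_gate / C_total (dimensionless)."""
    return c_gate / c_total

# A symmetric doubly-contacted dot: the gate sees half the total capacitance.
alpha = lever_arm(c_gate=1.0, c_total=2.0)
delta_vg = 10e-3  # a 10 mV step on the gate ...
delta_e = alpha * E_CHARGE * delta_vg  # ... shifts the dot level by alpha*e*dVg
print(alpha)    # 0.5, matching the symmetric case in the paper
print(delta_e)  # about 8e-22 J, i.e. a 5 meV shift
```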

## Modeling

The author sketches out the basics of the parabolic potential well assumption. They assume that the z-direction is completely constrained, and that the x and y directions are governed by the parabolic potential well. The potential is circularly symmetric.

I have also read elsewhere that this model matches up rather well with observations. Even in 1996 this was apparently already known.

Introducing a constant magnetic field to the analysis breaks the degeneracy in the quantum number l. Positive and negative l now have slightly different energies due to the interaction of the orbital magnetic moment with the field. Note that this is the magnetic moment created by the evolution of the electron’s wavefunction, such that the electron can be considered to be moving in a circle around the center of the potential well. A magnetic field applied along the z axis enhances confinement in the dot. The field also introduces Zeeman (spin) splitting due to the intrinsic magnetic moments of the electrons.
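These single-particle levels of the parabolic dot in a perpendicular field are the well-known Fock–Darwin spectrum, which has a closed form. A minimal numerical sketch (with ħ and the confinement frequency set to 1 for simplicity, my own choice of units) makes the l-splitting easy to see:

```python
import math

# Fock-Darwin levels for a 2D parabolic dot in a perpendicular field B:
#   E(n, l) = (2n + |l| + 1) * hbar*Omega  -  l * hbar*omega_c / 2,
# where Omega = sqrt(omega_0^2 + omega_c^2 / 4) and omega_c is the cyclotron
# frequency. Units here are arbitrary: hbar = 1 and omega_0 = 1.
def fock_darwin(n, l, omega_c, omega_0=1.0):
    omega = math.sqrt(omega_0**2 + omega_c**2 / 4.0)
    return (2 * n + abs(l) + 1) * omega - l * omega_c / 2.0

# At B = 0 the +l and -l levels are degenerate ...
print(fock_darwin(0, 1, 0.0), fock_darwin(0, -1, 0.0))  # 2.0 2.0
# ... and a finite field splits them (which sign drops depends on the
# field direction convention).
print(fock_darwin(0, 1, 0.5), fock_darwin(0, -1, 0.5))
```

Note also that the square root shows the enhanced confinement directly: the effective frequency Omega grows with the field.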

With the introduction of the magnetic field to the discussion, the author began to refer to the quantum levels as Landau levels. Strong magnetic fields can cause only one side of the level, let’s say the side with positive l, to be populated.

There is an interesting plot demonstrating the zig-zag effect as electrons end up populating different levels as the magnetic field strength increases. At some point, the zig zag stops because all electrons are in the lowest Landau level (and on one side of the l range I believe). This is very striking when seen in a plot.

Another important effect of increasing magnetic field is the fact that all of the radii of the different l levels shrink. This makes sense in light of the observation we made above that the magnetic field increases the confinement strength.

Example with 2 electrons in the ground (singlet) state: both electrons occupy the l = 0 orbital with opposite spins. With increasing B-field, it is possible to raise the Zeeman energy enough that one of the electrons gets promoted to l = 1.

Interactions of magnetic moments alone are not sufficient to produce the observed spin flips and l transitions. We must take the Coulomb interaction between the electrons into account in order to get the right answer. As the magnetic field increases, the l radius decreases, and eventually it becomes possible for an electron to jump from the l = 0 state into the l = -1 state.

Once in the lowest Landau level, more changes can still occur. Nominally, the lowest-energy arrangement of the electrons within the lowest Landau level would have them paired off, each pair with opposing spins. As the magnetic field increases, however, higher-l states eventually become lower in energy than either the spin-up or spin-down states, depending on the direction of the magnetic field. This means we will continue to see bumps in the spectrum as electrons flip spins and move to more distant l values. Self-consistent calculations can reproduce some of these effects, but they seem to overstate the number of flips that happen at low field and underestimate the number of flips at high field. Something is obviously still missing.

The Hartree-Fock technique takes into account the repulsion between many different electron wavefunctions. It reproduces much of the correct behaviour. An interesting result is that there is actually a short-range attraction force acting on electrons that are in different l bands. Adjacent bands tend to be preferentially populated. This can be compared to the fact that electrons in the same l band tend to be on opposite sides of the dot from one another. By moving to the higher l bands, it seems that the electrons can both be in a lower energy state and be closer to one another, an apparent paradox.

When the magnetic field becomes even higher, it eventually becomes energetically favourable for gaps to form in the l spectrum. The lowest-energy states involving gaps typically have the gaps adjacent to one another. Thus we don’t end up with a smattering of l gaps throughout our states; we end up with one big block of empty l’s.

It is known, however, that the Hartree-Fock calculations leave something out: they do not take electron correlation into account. In this area, other techniques such as “exact diagonalization” seem to do better. However, the author does not mention a successful combination of these approaches into one theoretical model.

Their final section mentions some exciting further work that is being pursued, or will likely be pursued, in the field of quantum dots. One of the more interesting points for me is that a single-electron transistor obviously has a ‘fan-out’ problem. That is, normally the result of a piece of digital logic can be used to set off a cascade of other logic operations. This is obviously difficult if the result of your digital logic operation is the movement of a single electron. It seems however that people are finding ways around this. Perhaps I will find out more about this in the future.

## What I learned from Kastner’s “Artificial Atoms” paper from 1993

These are my notes from reading a paper that was published in Physics Today in 1993 entitled “Artificial Atoms” by Marc Kastner of MIT.

### Quantum dots in general

Quantum dots can constrain the motion of electrons to a space that is on the order of 100 nm. Within this space the energy levels of the electrons become quantized similarly to an atom. This is partly why Kastner regards quantum dots as ‘artificial atoms’.

The basic concept of a quantum dot is essentially a quantum well that is localized in all three dimensions. A bit of semiconductor is surrounded by some geometry of insulator.

He presents a different perspective on coulomb blockade than what I had learned before; in retrospect, my previous learning had focused on different aspects of the effect. This analysis focuses on the capacitance the electron experiences with the entire geometry of the setup. Adding an electron to the quantum dot costs a charging energy of $$e^2/C$$, so if the difference between the Fermi level of the source and the Fermi level of the dot is smaller than this minimum, an electron cannot tunnel.

This is of course assuming that the thermal energy kT is smaller than $$e^{2}/C$$.
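As a quick numerical illustration of this condition (the dot capacitance below is an invented but plausible value, not from the paper):

```python
# Quick check of the Coulomb-blockade condition e^2/C >> kT for an invented
# dot capacitance. If kT approaches e^2/C, thermal smearing washes it out.
E = 1.602e-19   # electron charge, C
KB = 1.381e-23  # Boltzmann constant, J/K

def charging_energy(capacitance):
    return E**2 / capacitance

def blockade_visible(capacitance, temperature, margin=10.0):
    """True when e^2/C exceeds kT by the given safety margin."""
    return charging_energy(capacitance) > margin * KB * temperature

c_dot = 1e-16  # 0.1 fF, a plausible small-dot capacitance (my assumption)
print(charging_energy(c_dot) / E * 1000, "meV")  # ~1.6 meV
print(blockade_visible(c_dot, temperature=0.1))   # visible at 100 mK
print(blockade_visible(c_dot, temperature=300.0)) # not at room temperature
```

This is why these experiments are done in dilution refrigerators: a millielectronvolt-scale charging energy is enormous compared to kT at 100 mK but negligible at room temperature.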

### Low temperature current flow

Fairly interesting discussion of the reasons why only specific conditions allow for current flow through a quantum dot near zero temperature. He shows that the energy of the state of a charge is given by:

$$E = Q V_G + Q^2 / 2C$$

Where E is the energy, Q is the charge, $$V_G$$ is the gate voltage, and C is the capacitance with regards to the rest of the system. For this analysis he only considers the capacitance with the quantum dot itself. In a real-life situation there would likely be notable contributions from the gate and contacts as well.

$$Q_0$$ is defined as the charge at which the energy is minimized. Since the above equation is parabolic, you can imagine that $$Q_0$$ is the charge at which the minimum of the parabola occurs. Note that unlike the actual charge on the dot, $$Q_0$$ is not quantized: it is set by the gate voltage ($$Q_0 = -C V_G$$) and can take any value, including fractions of a fundamental charge.

Imagine quantized spots on the energy parabola, separated from each other by one fundamental charge. When two of these spots are degenerate – say $$Q_1 = -N e$$ and $$Q_2 = -(N-1) e$$ – current can flow at zero temperature, because no energy is needed to switch between the states with different numbers of electrons.
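Kastner’s parabola and this degeneracy condition are easy to reproduce numerically. A minimal sketch in reduced units (e = C = 1, my own simplification):

```python
# Sketch of Kastner's electrostatic energy E = Q*V_G + Q^2/(2C), evaluated at
# the quantized charges Q = -N*e. Degeneracy between the N- and (N-1)-electron
# states (where zero-temperature current can flow) occurs at Q0 = -(N - 1/2)e.
E_CHARGE = 1.0  # work in units where e = 1 and C = 1 for clarity

def energy(n_electrons, v_gate, capacitance=1.0):
    q = -n_electrons * E_CHARGE
    return q * v_gate + q**2 / (2.0 * capacitance)

# At v_gate = 0.5 (i.e. (N - 1/2)e/C with N = 1), the 0- and 1-electron
# states have equal energy, so an electron can hop on and off for free.
print(energy(0, 0.5), energy(1, 0.5))  # 0.0 0.0
# Away from the degeneracy point, the two states differ in energy.
print(energy(0, 0.2), energy(1, 0.2))
```

Sweeping v_gate and watching these crossings recur every unit of charge reproduces the familiar periodic conductance peaks of Coulomb blockade.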

### Analogy to chemistry

In his example, increasing gate voltage leads to large numbers of electrons being confined in the quantum dot. As the gate voltage increases, we also observe changes in the behaviour of this electron system. A direct analogy can be drawn to the chemistry of the periodic table: using gate voltage, we can transform our quantum dot from one element to another. Just as in chemistry, the electronic behaviour can vary substantially depending on the number of electrons present.

### Energy quantization

Here Kastner briefly discusses the energy quantization of the electrons in our artificial atoms, noting that only a small fraction of the electrons in the quantum dot are free; the rest are bound tightly to atoms in the lattice. These free electrons are the ones we are generally talking about when we discuss quantum dots. He briefly describes how different construction techniques tend to allow different numbers of free electrons to be confined on the quantum dot. For the purposes of my research, I am already aware that we have a system in which we can easily choose conditions under which the quantum dot(s) will contain zero, one, two, etc. free electrons.

It is possible to map out the energy spectrum of a quantum dot by keeping the gate voltage steady and conducting a source-drain bias sweep. If an energy level falls between the Fermi levels of the source and drain, current will flow. If two energy levels fall between, then more current will flow. Some corrections need to be made for the changes in the Fermi energy of the device itself (since it will be somewhere between the source and drain levels), but this is rather straightforward. The energy spectrum can thus be mapped out. Note that this energy spectrum includes multiple electron states as well as excited states of each number of electrons.
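The counting argument in this paragraph can be sketched in a few lines. The level positions below are hypothetical, purely for illustration:

```python
# Toy picture of a source-drain bias sweep: current steps up each time another
# dot level enters the bias window between the source and drain Fermi levels.
def levels_in_window(levels, mu_source, mu_drain):
    """Return the dot levels lying between the two Fermi levels."""
    lo, hi = sorted((mu_source, mu_drain))
    return [e for e in levels if lo <= e <= hi]

dot_levels = [1.0, 2.5, 3.1, 4.8]  # invented spectrum (meV, say)

# Widening the bias window admits more levels, hence more current.
print(len(levels_in_window(dot_levels, 0.0, 2.0)))  # → 1
print(len(levels_in_window(dot_levels, 0.0, 3.5)))  # → 3
```

Each step up in current during the sweep marks one more level entering the window, which is how the spectrum gets mapped out (before the Fermi-level corrections mentioned above).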

Increasing the gate voltage a lot would lead to more electrons being present on the quantum dot. This means that there are more valid energy states to be filled at or below the thermal energy. Thus, it makes sense that Kastner says that increasing the gate voltage leads to a decrease in the energy of confined states.

### Screening length and surface charge

It was here that I ran across the term ‘screening length’. Since I wasn’t 100% sure what it was, I started searching. I quickly found the Wikipedia articles on Debye length and electric field screening. It seems that screening length is referring to the concept also known as the Debye length. Over these distances, plasmas can screen out electric fields. That is, at distances longer than the Debye length, the effect of electric fields is substantially hidden by the movement response of the plasma to compensate.

In the article, Kastner uses the concept of screening length when discussing the all-metal artificial atom. In this case, the metal has a short screening length, so charge added to the quantum dot will reside very close to the surface. This in turn means that the electron-electron interaction is always $$e^2/C$$ regardless of the number of electrons that have already been added to the quantum dot. This does not apply to all types of quantum dots. The discussion seems to be limited in this case to the all-metal quantum dots.

### Experiments vs predictions

The energy levels of a two-probe quantum dot depend strongly on the applied magnetic field. This is not the case for all types of quantum dots. Level spacings in a two-probe quantum dot are irregular due to the effect of charged impurities in the materials used.

In 1993 it seems that calculating a full spectrum was not yet possible. I imagine that soon I will be looking at more recent literature in which this is accomplished. The simplest calculation method uses the simple harmonic oscillator potential, together with the assumption of a non-interacting system in which added electrons don’t change the potential’s shape or strength.

They show at the end of the paper experimental results comparing to their theoretical expectations. Due to some notable discrepancies, they conclude that the constant-interaction model is not quantitatively correct. They claim that this is because it is not self-consistent. I am not totally sure why they claim this. Perhaps it will come to be clear to me in time.

The line shape for electrons on quantum dots is Lorentzian. The following analysis places some constraints on the physical design of the quantum dot such as a minimum width criterion for the barriers.

The last section includes a few of the basic applications that were foreseeable at that point in history. It is interesting to me that this article predates the quantum computation fad that has swept much of condensed matter physics and certainly the sub-field of quantum dot physics.

## Gordon E. Moore: The Semiconductor Prophet

The insights in Gordon Moore’s world-famous paper, “Cramming More Components onto Integrated Circuits”, have been validated again and again in the decades since its publication.

Upon reading the paper, many startlingly accurate statements are likely to jump out at you. Startlingly accurate that is – because the date of original publication was April 1965.

Here are some of the prophetic insights that leaped out at me:

He says that memory may be distributed throughout the machine rather than concentrated in a single unit. My primary experience with this phenomenon is in the construction of personal computers. Today’s PCs have hard disks, RAM, and CPU cache in order of increasing speed and decreasing size. Additionally, specialized devices such as video cards are increasingly being fitted with their own RAM and even sometimes flash memory. Memory accessibility has proven to be one of the salient difficulties of computer design. Spreading the memory around has made even faster operations possible.

He accurately predicted that semiconductor integrated circuits will come to dominate electronics. The rise of the PC age is a good indication of this domination. Today we are beginning to see semiconductor integrated circuits in pretty much anything that has electric power flowing through it.

His ‘day of reckoning’ sounds a lot like the frequency wall that we hit in the early 2000s. Since then, clock frequencies in mainstream computers have not increased; instead, the top CPU manufacturers focus on improving performance per clock cycle and per watt of power.

He says that we may find it more economical to build larger systems out of smaller functions. Look at our multi-core personal computers, computer clusters, cell computers, and cloud computing. As a consequence of the frequency wall and economics, today’s supercomputers are dominated by multicore and multiprocessor systems. In the last few years we have also been watching the rise of the cloud computing system. Using the power of the Internet, staggeringly huge supercomputers are created out of smaller cells linked to each other through the network. We have only just scratched the surface of how cloud computing is going to change the face of our computing world.

Lastly, this is the piece in which Moore first described the economic relationship that would come to be known as Moore’s Law. His observations are often misquoted and misinterpreted in popular media. He identified a definite trend in the cost of production of integrated circuit components and the number of components per integrated circuit. This has been extrapolated by later thinkers into a plethora of versions of “Moore’s Law” that are claimed to be representative. The accuracy of the later versions is highly questionable. However, Moore’s actual prediction has been remarkably accurate for over four decades.

## Scratching the surface of surface science

The primary resource for this material was a lecture by Peter Grütter.

### Why has it been researched?

The semiconductor industry is probably the primary reason why surface science has received so much attention; the development of its tools and techniques has been driven largely by that industry’s needs. The field is now well established in both industry and science.

Another major driver is catalysis. This is the area of study of how to catalyse reactions/processes so that they can happen more quickly and/or with lower energy requirements. Surfaces can be extremely important for catalysis. With regards to catalysis, the sites of interest on the surface are actually the kinks and defects rather than the flat surface itself.

Small features can be of primary importance in many of these condensed matter fields of study. Dr. Grütter calls attention to the fact that in the semiconductor industry, the doping atoms among the silicon are crucial to the operation of the devices.

### Introduction to Surface Science

In this class, we will be talking primarily about solid-vacuum interfaces rather than solid-liquid interfaces. We are building on the knowledge we gained in the introductory sections on vacuum systems.

Surfaces are three-dimensional. They are not merely two-dimensional planes; they are a layer of transition from bulk conditions to vacuum conditions.

The dipole layer is an interesting physical phenomenon that takes place at the surface of a material. Electron density does not drop to zero immediately outside the surface atoms; it tapers off, becoming negligible some small distance away from the surface. This distance is on the order of one Fermi wavelength, which varies depending on the material.

So some negative charge ends up outside the surface. The only picture I can find of this effect online is here, even though it is given in terms of electrostatic potential rather than electron density. Rather than a smooth drop in electron density, we end up with a charge density that is periodic (on the scale of the Fermi wavelength) as we look into the surface. Thus, just inside the surface we actually have a higher electron density than further into the bulk of the material. This interface between the high internal electron density and the low external electron density is called the dipole layer.

The dipole layer can stop atoms from diffusing out of the surface. As they diffuse towards the surface, they suddenly come up against a larger density of electrons, which push them away. In the image linked above, the diffusion would be taking place from right to left. The lower potential pushes back on the atom’s electrons, causing it to have more difficulty getting through the surface than it had moving throughout the bulk of the material.

### A few Observations

As we already know, removing an electron from the surface takes some energy. The amount of energy depends on several things, such as the strength of the bond to the ion core and the interaction of the electrons with each other. This is known as the work function. There are two versions: one considers the energy needed to move the electron to just outside the solid surface, while the other considers moving the electron to infinity.

The work function depends primarily on the dipole layer. Different surfaces (faces) of a crystal can have different work functions, depending on the orientation of the atoms. The work function also depends on step density. What is a step? Consider a perfect planar surface of atoms, then add another layer to half the surface, so that there is a ‘step’ up to the second layer. There can be many such steps. As a heavy and long-time computer user, one of the first things I visualized was an angled line on a computer monitor: it is not smooth, but made of straight sections with ‘steps’ between them. Similarly with a surface viewed at the nano scale. The closer the steps are to each other, the higher the step density. Step density changes the work function because of the details of the dipole layer at each step.

A question you must learn to ask yourself: is what you are studying affected by small defects in the system? In the history of science this has been overlooked many times. How big an effect can these things have? It turns out that step density alone can create a 5% difference in the work function of tungsten. Even more astounding, a 1 eV difference in work function can be measured depending on the tungsten crystal orientation! 1 eV at the nanometer scale indicates a huge difference in electric field, and these hugely different electric fields can help explain why such small defects often have a large effect on chemical reaction rates via catalysis.

### Surface Energy

The simplest way to explain surface energy that I can find is from Wikipedia, where it is stated that surface energy can be defined as the excess energy at the surface of a material compared to the bulk of the material.

In class, the first thing we discuss about surface energy is the jelly model (jellium), which is quite similar to the plum pudding model. It feels almost heretical to be talking about this, since this class is in a building named after the man who proved that the plum pudding model was wrong (Ernest Rutherford).

We can calculate the surface energy for jellium quite easily. The result agrees with experiment at low densities but breaks down badly at higher densities. More complicated (and accurate) models are quite difficult to calculate, and the surface energy is also very hard to measure experimentally.

Surface energy is crucial to our understanding of many physical aspects of surfaces. For example, it helps us understand how we can grow materials on other materials. Will we get island growth or layer-by-layer growth?

One of the reasons this is difficult to model correctly is that the electron correlation effects between d-orbitals are difficult to calculate. This is why estimating the surface energy of elements such as gold and iron involves very complicated calculations.

It turns out that finding the minima of surface energy will show us the shape of an equilibrium crystal. Real crystals may not completely agree because our physical crystal growth is not perfect. In closing, surface energy is important for studying crystal shapes as well as understanding what materials we can grow on what substrates and how they grow.

### Surface Structure

There are three major ways in which the surface structure can be very different from the bulk structure.

#### Relaxation

The spacing between the surface atoms and the second layer is often not equal to the distance between the second and third layers. Surface atoms tend to get pulled in a little because they do not have a bond on one side. This is true for both covalent solids and metals. The relaxation may extend up to three layers deep, with the distances growing back toward the bulk lattice spacing as we go deeper.

#### Reconstruction

Reconstruction is when the surface structure differs from the bulk structure. For example, there might be more atoms in the surface layer than in a bulk layer, or they may be connected to each other at different angles. Thus, the unit cell of the surface crystal can be very different from the bulk unit cell. Some of these structures are so complicated that we actually cannot calculate them.

As a related aside, Dr. Grütter began talking about silicon (111). He said, “This was the Guinea Pig or Drosophila of surface science for a number of years.” Apparently about 20 years of work went into understanding silicon (111), which has what is called a “7×7 reconstruction” involving 64 atoms in 4 layers. The problem was eventually solved by a combination of scanning tunneling microscopy and diffraction studies.

Aside from the aside: This is not the industrially relevant silicon unit cell. That role is filled by silicon (001). Dr. Grütter says that it is very important that one can grow very smooth layers of oxide on silicon (001).

A third aside: Dr. Grütter says that silicon cannot be used well as a photon-emitting material, because its band gap is indirect and direct photon emission would violate momentum conservation. Gallium arsenide, however, is capable of being a useful photon-emitting material.

#### Composition

Most materials are alloys: there are multiple constituent elements. Will the surface layer have the same composition as a bulk layer? It turns out that surface layers are often completely different from the bulk in terms of composition. The surface might be all one element, the second layer might be a split of some kind, and the third layer a different split.

This fact has huge implications for surface characteristics such as the ability to catalyze reactions, corrosion resistance, hardness, etc.

### Surface Complexity

We tend to think of surfaces as atomically flat, but they are not. A decent flat surface might have truly flat areas that are 10nm in length. We might be able to get 100nm of nice flat area if we try really hard and employ a lot of tricks.

Some of the forms of imperfections in a surface are:

1. kinks
2. terraces
3. vacancies
4. monoatomic steps

A fair amount of research has been done on the effects of these imperfections in surfaces. For example, we have learned that electromigration is affected: defects can backscatter electrons. This becomes important when the surface atoms make up a notable fraction of the total atoms in the wire, which happens at the nano scale.

## General Trends in the History of Circuits

The material for this post is based primarily on a lecture by Thomas Szkopek in the class Nanoelectronic Devices that I am taking at McGill.

Electrons are very light and have a definite (constant) charge. Charge to mass ratio is a primary reason why electrons are better than nucleons or mechanical devices for the creation of semiconductor electronics.

What is a transistor? The name comes from ‘transferred resistance’: we can control the resistance of a lump of material, and by controlling the resistance we can control the flow of current through a semiconductor. This is the primary basis for decision making at the circuit level.

We have built more transistors than anything else? (Or is it computer bits, like those in a hard drive? I am not sure what he said.)

In semiconductors, germanium was eventually replaced by silicon. Why?

Not because of cost or availability (initially). It was primarily a question of easier fabrication: the key facet was the quality of the oxide you can grow on silicon compared with germanium.

There has been a lot of talk for years about how this or that material was going to replace silicon. None of them have yet done so because silicon is really well established and quite good at what it does. “Gallium Arsenide is the material of the future and it always will be.” – Szkopek.

Why smaller and smaller integrated circuits? By making the parts smaller and closer together, we can eliminate a lot of the resistances, capacitances and inductances as well as reducing our overall material usage. This should mean cheaper integrated circuits that require less materials to create and less power to run.

### Gordon Moore

Gordon Moore was a chemist by training but was also one of the most successful electronic engineers of all time. What was Moore’s major contribution? He figured out how to grow high quality oxide on silicon.

As a computer scientist, I am well aware of some of the many different ways Moore’s Law has been misrepresented. So what is it actually? We read “Cramming More Components onto Integrated Circuits”, the famous paper that Moore wrote in 1965, from which ‘Moore’s Law’ was extrapolated.

Moore’s Law relates the relative manufacturing cost per component to the number of components per integrated circuit (see the diagram in the paper, or in the Wikipedia article).

Cost increases as we move to the right primarily because fewer chips are successfully made when we try to jam more components onto the wafer. There is a minimum-cost point for each technology level.
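The shape of that cost curve is easy to reproduce with a toy yield model. The exponential yield and all the numbers here are my own assumptions for illustration, not Moore’s data:

```python
import math

# Toy version of Moore's minimum-cost curve: a fixed wafer cost is spread over
# the working components. Yield falls as more components are crammed on, so
# cost per component eventually rises again.
def cost_per_component(n_components, wafer_cost=100.0, defect_rate=0.001):
    yield_fraction = math.exp(-defect_rate * n_components)  # exponential yield loss
    return wafer_cost / (n_components * yield_fraction)

costs = {n: cost_per_component(n) for n in (100, 1000, 2000, 5000)}
best = min(costs, key=costs.get)
print(best)  # → 1000: the minimum-cost integration level for this toy technology
```

As fabrication improves, the defect rate drops and the minimum shifts to the right, which is exactly the year-over-year progression Moore’s diagram shows.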

### Atomic Scale

What happens when transistor dimensions approach 10nm? We are looking at atomic scale.

He then showed us some pictures taken with scanning tunneling microscopy of a tiny surface with iron atoms on it, physically arranged into a circle. As this symmetry is created, a symmetric pattern of standing waves appears in the electron density at the center. This is an incredible (and graphic) demonstration of quantum mechanics in action. The pictures are from the paper “Confinement of electrons to quantum corrals on a metal surface.”

One of Szkopek’s main points with regards to these photos is that atomic scale disorder is going to be present when we are working at such ridiculously small scales. Some of this disorder can be dealt with through more careful manufacturing and usage techniques, but we are definitely getting into the realm where we are starting to touch upon the omnipresent low-level disorder of the universe.

Szkopek says that Intel is currently using a 1.2 nm oxide layer, about 4 atomic layers of oxide. We are at the atomic scale, and we will have to consider the physics that governs it.

In closing, Szkopek talked a bit about how the nice formulas we tend to see in the theoretical sections of courses devolve into complicated, ugly looking things when we try to do real problems. There is a tendency to term this “things getting crazy”. Szkopek makes his point clear when he closed the class with: “Things don’t get crazy, they get physical!”