What I learned from Kastner's "Artificial Atoms" paper from 1993

These are my notes from reading a paper that was published in Physics Today in 1993 entitled "Artificial Atoms" by Marc Kastner of MIT.

Quantum dots in general

Quantum dots can confine the motion of electrons to a space on the order of 100 nm. Within this space the energy levels of the electrons become quantized, much like those of an atom. This is partly why Kastner regards quantum dots as 'artificial atoms'.

The basic concept of a quantum dot is essentially a quantum well that is localized in all three dimensions. A bit of semiconductor is surrounded by some geometry of insulator.

Coulomb blockade

He presents a different perspective on Coulomb blockade than what I had learned before; in retrospect, my previous learning had focused on different aspects of the effect. This analysis focuses on the capacitance the electron experiences with the entire geometry of the setup. Adding an electron to a quantum dot costs a charging energy set by this capacitance. This energy change is e²/2C, so an energy difference between the Fermi level of the source and the Fermi level of the dot that is smaller than this minimum implies that an electron cannot tunnel.

This is of course assuming that the thermal energy kT is smaller than e²/2C.
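As a sanity check on these scales, here is a quick back-of-the-envelope calculation. The capacitance is a made-up but plausible number for a small dot, not one taken from the paper:

```python
e = 1.602e-19      # elementary charge (C)
k_B = 1.381e-23    # Boltzmann constant (J/K)

C = 1e-16          # assumed total dot capacitance: 0.1 fF (illustrative)
E_c = e**2 / (2 * C)   # charging energy needed to add one electron

for T in (300.0, 4.2, 0.1):
    print(f"T = {T:6.1f} K: E_c / kT = {E_c / (k_B * T):7.2f}")
```

For this capacitance the blockade is washed out at room temperature and only becomes sharp well below liquid-helium temperatures, which is why these experiments are done in cryostats.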

Low temperature current flow

Fairly interesting discussion of the reasons why only specific conditions allow current flow through a quantum dot near zero temperature. He shows that the electrostatic energy of a charge on the dot is given by:

E = QV_g + Q²/2C

where E is the energy, Q is the charge, V_g is the gate voltage, and C is the capacitance with regard to the rest of the system. For this analysis he considers only the capacitance of the quantum dot itself. In a real-life situation there would likely be notable contributions from the gate and contacts as well.

Q₀ is defined as the charge at which the energy is minimized. Since the above equation is parabolic, you can picture Q₀ as the charge at which the minimum of the parabola occurs (it works out to Q₀ = -CV_g). Unlike most charge quantities we talk about, Q₀ is not quantized: it varies continuously with gate voltage, while the actual charge on the dot comes in units of the fundamental charge e.

Imagine quantized spots on the energy parabola, each separated from the next by one fundamental charge. When two of these spots are degenerate, lying at the same height on the parabola (the charges -Ne and -(N+1)e, say, which happens when Q₀ = -(N+1/2)e), current can flow at zero temperature. This is because no energy is needed to switch between the states with different numbers of electrons.
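The degeneracy condition is easy to verify numerically. This is a sketch using the parabola above with an assumed capacitance; the gate voltage is tuned to the point Q₀ = -(N+1/2)e where the N- and (N+1)-electron states should coincide:

```python
e = 1.602e-19   # elementary charge (C)
C = 1e-16       # assumed dot capacitance (F)

def energy(Q, V_g):
    """Electrostatic energy E = Q*V_g + Q^2/(2C) of charge Q on the dot."""
    return Q * V_g + Q**2 / (2 * C)

N = 3
V_g = (N + 0.5) * e / C           # gate voltage at the degeneracy point
E_N  = energy(-N * e, V_g)        # N electrons on the dot
E_N1 = energy(-(N + 1) * e, V_g)  # N+1 electrons on the dot
print(E_N, E_N1)                  # the two energies come out equal
```

At this gate voltage the dot can freely flicker between N and N+1 electrons, which is exactly the condition for current flow at zero temperature.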

Analogy to chemistry

Increasing the gate voltage in his example leads to large numbers of electrons being confined in the quantum dot. As the gate voltage increases, we also observe changes in the behaviour of this electron system. A direct analogy can be drawn to the chemistry of the periodic table: using the gate voltage, we can transform our quantum dot from one element to another. Just as in chemistry, the electronic behaviour can vary substantially depending on the number of electrons present.

Energy quantization

Next, the energy quantization of the electrons in our artificial atoms. Here Kastner briefly discusses the fact that only a small fraction of the electrons in the quantum dot are free; the rest are bound tightly to atoms in the lattice. These free electrons are the ones we are generally talking about when we discuss quantum dots. He briefly describes how different construction techniques tend to allow different numbers of free electrons to be confined on the quantum dot. For the purposes of my research, I am already aware that we have a system in which we can easily choose conditions under which the quantum dot(s) will contain zero, one, two, etc. free electrons.

It is possible to map out the energy spectrum of a quantum dot by keeping the gate voltage steady and conducting a source-drain bias sweep. If an energy level falls between the Fermi levels of the source and drain, current will flow. If two energy levels fall between, then more current will flow. Some corrections need to be made for the changes in the Fermi energy of the device itself (since it will be somewhere between the source and drain levels), but this is rather straightforward. The energy spectrum can thus be mapped out. Note that this energy spectrum includes multiple electron states as well as excited states of each number of electrons.
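The level-counting logic of such a bias sweep can be sketched as a toy model. The level energies below are invented for illustration; each level inside the source-drain window contributes one conduction channel:

```python
# invented single-particle level energies of the dot (eV)
levels = [0.5, 1.3, 2.1, 4.0]

def channels(levels, mu_source, mu_drain):
    """Count levels inside the bias window; each one opens a current channel."""
    lo, hi = sorted((mu_source, mu_drain))
    return sum(lo <= E <= hi for E in levels)

for bias in (0.4, 1.5, 2.5):
    print(f"bias {bias} eV -> {channels(levels, 0.0, bias)} channel(s)")
```

Sweeping the bias and watching where the current steps up thus reads out the level spacings directly.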

Increasing the gate voltage a lot would lead to more electrons being present on the quantum dot. This means that more valid energy states sit at or below the Fermi level, waiting to be filled. Thus, it makes sense that Kastner says that increasing the gate voltage leads to a decrease in the energy of the confined states.

Screening length and surface charge

It was here that I ran across the term 'screening length'. Since I wasn't 100% sure what it was, I started searching. I quickly found the Wikipedia articles on Debye length and electric field screening. It seems that screening length is referring to the concept also known as the Debye length. Over these distances, plasmas can screen out electric fields. That is, at distances longer than the Debye length, the effect of electric fields is substantially hidden by the movement response of the plasma to compensate.
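The Debye length has a simple closed form for electrons, λ_D = sqrt(ε₀·k_B·T / (n·e²)). A quick evaluation with invented plasma parameters:

```python
import math

eps0 = 8.854e-12   # vacuum permittivity (F/m)
k_B = 1.381e-23    # Boltzmann constant (J/K)
e = 1.602e-19      # elementary charge (C)

def debye_length(T, n):
    """Electron Debye length (m) at temperature T (K) and density n (m^-3)."""
    return math.sqrt(eps0 * k_B * T / (n * e**2))

# invented lab-plasma parameters: T = 1e4 K, n = 1e16 m^-3
print(debye_length(1e4, 1e16))  # tens of micrometres
```

Beyond this distance the plasma's charges rearrange to cancel an applied field, which is the screening effect described above.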

In the article, Kastner uses the concept of screening length when discussing the all-metal artificial atom. In this case, the metal has a short screening length, so charge added to the quantum dot resides very close to the surface. This in turn means that the electron-electron interaction is always e²/C, regardless of the number of electrons that have already been added to the quantum dot. This does not apply to all types of quantum dots; the discussion seems to be limited in this case to the all-metal kind.

Experiments vs predictions

The energy levels of a two-probe quantum dot depend strongly on the applied magnetic field. This is not the case for all types of quantum dots. Level spacings in a two-probe quantum dot are irregular due to the effect of charged impurities in the materials used.

In 1993 it seems that the calculation of a full spectrum was not possible yet. I imagine that soon I will be looking at more recent literature in which this is accomplished. The simplest calculation method uses a simple harmonic oscillator potential. They also assume a non-interacting system in which the added electrons don't change the shape or strength of the potential.

They show at the end of the paper experimental results comparing to their theoretical expectations. Due to some notable discrepancies, they conclude that the constant-interaction model is not quantitatively correct. They claim that this is because it is not self-consistent. I am not totally sure why they claim this. Perhaps it will come to be clear to me in time.

The line shape for electrons on quantum dots is Lorentzian. The following analysis places some constraints on the physical design of the quantum dot such as a minimum width criterion for the barriers.

The last section includes a few of the basic applications that were foreseeable at that point in history. It is interesting to me that this article predates the quantum computation fad that has swept much of condensed matter physics, and certainly the sub-field of quantum dot physics.

Workplace Hazardous Materials Information System

Here are some of the hilarious, interesting, or scary tidbits that I wrote down when I took my WHMIS training in Feb 2011.

Compressed Gas

Many of the pictures the presenter showed us were actually taken during lab inspections at McGill. The first of these was a picture of a compressed gas cylinder with a backpack hanging from part of the regulator assembly on top of the cylinder.

One litre of liquid nitrogen can displace 700 litres of air. There is a risk of asphyxiation if you are in a small room.
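A rough well-mixed estimate of what that expansion ratio means for oxygen levels; the room size and spill volumes are invented for illustration:

```python
ROOM_L = 10_000     # assumed 10 m^3 room, in litres
EXPANSION = 700     # litres of N2 gas per litre of liquid nitrogen

def oxygen_fraction(spill_litres):
    """Well-mixed estimate of the O2 fraction after an LN2 spill."""
    n2_gas = spill_litres * EXPANSION
    return 0.21 * ROOM_L / (ROOM_L + n2_gas)

for litres in (1, 5, 10):
    print(litres, "L spilled ->", round(oxygen_fraction(litres), 3))
```

Even a few litres boiling off in a small room pushes the oxygen fraction well below the roughly 19.5% commonly cited as the minimum safe level.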

Flammable Materials

The flash point of a material is the temperature at which it releases enough vapour that it can be ignited. I have definitely heard the term flash point used in a very different sense (in common usage), such as referring to the temperature at which a material will suddenly explode or burst into flame.

There was a fire/explosion at the Montreal Neurological Institute because someone left some sort of chemical on a hot plate and left the lab. The teacher suggests a rule: If you are working with volatile things, unplug all the hotplates nearby.

Domestic refrigerators and freezers can sometimes create small internal sparks during their operation. This means that if you store volatile substances in these places, you can end up with an explosion. Buy things that are certified for storing volatile products (they won’t spark internally, among other things).

The teacher talks about how dangerous peroxide crystals can be. They can form in many different ways, and are susceptible to heat, friction, and shock; any of these might make them suddenly explode. This is a good reason to keep an eye on the expiry dates of chemicals that can form peroxide crystals. The recommendation is not to keep such chemicals around for more than a year.

Never store oxidizing agents together with flammable materials.

Chemical Sensitization

Repeated exposure to chemicals can cause sensitization. This means that you become more sensitive to exposure as time goes on. This reminds me of what some friends of mine told me when they were talking about their experience with sick building syndrome. For them, even a whiff of a cleaning chemical used on a hospital floor might be enough to make them physically ill.

Acids, etc

Pour acid into water, not water into acid.

Never store organic acids with oxidizing agents.

Hydrofluoric acid needs special consideration. There is a special cream (calcium gluconate gel) that you must always keep with the hydrofluoric acid bottle. Why? Hydrofluoric acid attacks the calcium in your blood, and within a very short time there is a very high risk of cardiac arrest. You must apply the cream to the exposed area; you should then have enough time to get to the hospital. Because of this serious health concern, some labs actually keep the cream attached to the bottle of hydrofluoric acid.

Karen Wetterhahn and dimethylmercury safety

We were told a very scary story about the late Professor Karen Wetterhahn of Dartmouth College. She was a highly regarded expert on heavy metal poisoning, and she was abiding by all prescribed safety procedures in the lab when she was exposed. During a lab procedure, she dropped some dimethylmercury on her gloved hand. She died less than a year after the exposure from the massive dose of mercury she had received, of which she was unaware for the first six months.

Her accident set off a study investigating whether the safety procedures were actually effective. This is when it was discovered that latex gloves are ineffective at protecting against dimethylmercury: it penetrates them in less than 15 seconds.

Material Safety Data Sheets

If you are exposed to a chemical, bring the MSDS with you when you get medical help.

Gordon E. Moore: The Semiconductor Prophet

The insights in Gordon Moore’s world-famous paper, Cramming more components onto integrated circuits, have been validated again and again in the decades since its publication.

Upon reading the paper, many startlingly accurate statements are likely to jump out at you. Startlingly accurate that is - because the date of original publication was April 1965.

Here are some of the prophetic insights that leaped out at me:

He says that memory may be distributed throughout the machine rather than concentrated in a single unit. My primary experience with this phenomenon is in the construction of personal computers. Today’s PCs have hard disks, RAM, and CPU cache in order of increasing speed and decreasing size. Additionally, specialized devices such as video cards are increasingly being fitted with their own RAM and even sometimes flash memory. Memory accessibility has proven to be one of the salient difficulties of computer design. Spreading the memory around has made even faster operations possible.

He accurately predicted that semiconductor integrated circuits will come to dominate electronics. The rise of the PC age is a good indication of this domination. Today we are beginning to see semiconductor integrated circuits in pretty much anything that has electric power flowing through it.

His ‘day of reckoning’ sounds a lot like the frequency wall that we hit in the early 2000s. Since then, clock frequencies in mainstream computers have not increased. Today, the top CPU manufacturers focus on improving performance per clock cycle and per watt of power.

He says that we may find it more economical to build larger systems out of smaller functions. Look at our multi-core personal computers, computer clusters, cell computers, and cloud computing. As a consequence of the frequency wall and economics, today’s supercomputers are dominated by multicore and multiprocessor systems. In the last few years we have also been watching the rise of the cloud computing system. Using the power of the Internet, staggeringly huge supercomputers are created out of smaller cells linked to each other through the network. We have only just scratched the surface of how cloud computing is going to change the face of our computing world.

Lastly, this is the piece in which Moore first described the economic relationship that would come to be known as Moore’s Law. His observations are often misquoted and misinterpreted in popular media. He identified a definite trend in the cost of production of integrated circuit components and the number of components per integrated circuit. This has been extrapolated by later thinkers into a plethora of versions of “Moore’s Law” that are claimed to be representative. The accuracy of the later versions is highly questionable. However, Moore’s actual prediction has been remarkably accurate for over four decades.
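The trend Moore actually identified is exponential growth in components per chip. A sketch of that extrapolation, assuming a starting point of roughly 64 components in 1965 and the two-year doubling time he later settled on (both numbers are approximations, used here only for illustration):

```python
def components(year, base_year=1965, base_count=64, doubling_years=2):
    """Components per chip under a fixed doubling time (illustrative only)."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for y in (1975, 1985, 2005):
    print(y, int(components(y)))
```

The point is not the precise numbers but the functional form: any fixed doubling time gives this kind of runaway growth, which is why casual restatements of "Moore's Law" drift so far from his original cost-per-component observation.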

Getting Into Graduate School

Interested in getting into graduate school? So was I. My problem was that I was rejected from three graduate schools the first time I applied.

I have since been accepted into one of the schools that originally rejected me. During this process, I have learned a lot about how grad student selection tends to take place. I do not claim to be an expert on this topic, but I do feel that my experience and knowledge may be helpful to some people considering this life choice.

Read the rest of the post at Live to Learn.

Leaks, Pumps, and Gauges in Vacuum Science

This post is an experiment. I missed the second class of Experimental Condensed Matter, but I have access to the slides used during the class. My efforts here will be to investigate the topics raised on the slides and to understand them to some degree. The results of my investigation I will write into this post. Curious how well this will work? So am I. Here goes.

This work is based on slides developed by Peter Grütter.

Gas Load and Leaks

Plastic and Metal Diffusion

Gases dissolved in plastic seals can diffuse to the surface and then outgas into the evacuated space, creating a gas load. The same happens with metals. However, the outgassing rate from metals falls off roughly as 1/t, while the rate from plastics falls off only as the square root of time (roughly 1/√t). Therefore, after a long time, even the small area of the plastic seals can dominate this sort of gas load.
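The crossover between the two time dependences is easy to estimate. A sketch with invented outgassing rates (a large metal area against a small plastic seal):

```python
import math

# invented gas loads at t = 1 hour (mbar*L/s)
q_metal = 1e-8     # large metal surface
q_plastic = 1e-9   # small plastic seal

def loads(t):
    """Gas loads at time t (hours): metal falls as 1/t, plastic as 1/sqrt(t)."""
    return q_metal / t, q_plastic / math.sqrt(t)

t_cross = (q_metal / q_plastic) ** 2   # crossover time, in hours
print(f"plastic seal dominates after ~{t_cross:.0f} h")
for t in (1, 10, 1000):
    m, p = loads(t)
    print(t, "metal" if m > p else "plastic")
```

With these numbers the plastic overtakes the metal after about 100 hours, even though it started a factor of ten lower.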

Permeation

Permeation leaks will always happen. The best we can do is minimize them by using less permeable materials for the construction of our vacuum apparatus.

Vacuum catalogues include a lot of useful information about the properties of materials that might be used in a vacuum system. Among the information available would be things such as permeability, outgassing rates, and the ability to be baked (raised to high temperature to get out annoying things like water).

Virtual Leaks

Virtual leaks are trapped volumes: pockets of gas, such as the air caught under a screw in a blind hole, that escape only very slowly into the chamber and so mimic a real leak. There are vented screws with holes drilled through their core, so the trapped air can escape more quickly.

Double Seals

Double O-ring systems employ two seals to guard against leaks through the seals. This technique extends to the idea of differential pumping: pumping out the volume between the two seals down to a relatively low pressure. Grütter gave the example of 1 mbar for the in-between pressure.

Pump-down curves

Pump-down curves are a valuable troubleshooting technique. Different sorts of gas load have different pump-down signatures. We went through this in more detail in the previous article on vacuum systems.

Troubleshooting Leaks: The pump-down curve for a virtual leak is different than for a real leak.
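A minimal pump-down model shows why the signatures differ: gas in the volume pumps out exponentially, while a real leak sets a pressure floor of Q/S. All numbers below are invented for illustration:

```python
import math

V = 50.0        # assumed chamber volume (L)
S = 10.0        # assumed pumping speed (L/s)
p0 = 1000.0     # starting pressure (mbar)
Q_leak = 1e-4   # real-leak throughput (mbar*L/s)

def pressure(t):
    """Volume gas pumps out exponentially; a real leak sets a floor at Q/S."""
    return p0 * math.exp(-S * t / V) + Q_leak / S

for t in (0, 30, 600):
    print(f"t = {t:4d} s: p = {pressure(t):.3e} mbar")
```

A real leak flattens out at the constant floor Q/S, whereas a virtual leak's trapped gas slowly depletes, so its curve keeps creeping downward; that difference in shape is what the troubleshooting relies on.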

Pressure ranges for leak detection techniques

Big leaks are probably near seals or joints. Grütter gives the number p > 200 mbar. This is pretty big, about 0.2 atm.

If p ~ 1-40 mBar, he says that a good technique is to spray the outside of the system with a ‘volatile organic fluid of low vapour pressure and low flammability such as methanol’. When the spray covers the hole, a substantial pressure change should be observable.

If p < 1 mbar, use helium leak detection with a helium source and a mass spectrometer. A helium mass spectrometer is a device for finding small leaks. Helium is leaked into the chamber, and the spectrometer samples and ionizes the gases outside of the vacuum system. Since ionized helium takes a different path through the spectrometer than any of the other gas atoms and ions, it is relatively easy to measure the amount of helium being picked up. This means we can detect the helium that is leaking out of the vacuum system.

Tracer probe leak detection is another technique. Here we use a tracer gas and a measurement device that is sensitive to the gas. The device is placed inside the vacuum system, and the tracer gas is shot towards various outside parts of the vacuum system. When the measuring device detects a sudden leap in the quantity of that gas, we have found our leak. Similar idea to the helium mass spectrometer but reversed placement of the tracer gas and measuring apparatus.

Why helium? It is non-toxic, cheap, goes through any cracks, and is only present at about 5 ppm in the atmosphere.

Pumps

Pumps have a pressure range over which they operate best. Every design of pump has its own ideal operating conditions. It seems that high vacuum pumps are rarely operated in rough vacuum and that ultra-high vac pumps are generally only employed after high vacuum conditions have been attained.

Roughing pumps do the work from atmospheric pressure down to perhaps 10⁻³ torr. The ‘foreline’ is between the roughing pump and the high-vac pump. It is not clear to me at this point why the foreline has to exist.

Rotary Vane Pump

This is an oil-sealed roughing pump. A good illustration of how it operates can be seen in the Wikipedia article. A rotating central piece pushes the gases around, causing a pumping action from the input line to the output line. The basic idea is to squeeze gas out of the low-pressure side into the high-pressure side.

This pump in particular suffers from an oil backstreaming problem. Oil backstreaming is when oil molecules travel backwards from the pump into the evacuated space, where they outgas and raise the pressure.

How do we deal with oil backstreaming from the roughing pump? We looked at one possibility, the zeolite trap. Molecular sieves can selectively trap molecules of certain sizes; zeolite traps are one kind, and can be designed to trap molecules of specific sizes.

Sorption pump

Adsorption is the sticking of a molecule to a surface. The sorption pump adsorbs molecules onto a very porous material, whose inner construction is many small fins to maximize surface area. It is used primarily as a roughing pump and is typically cooled with liquid nitrogen. It can achieve 10⁻² mbar, or down to 10⁻⁷ mbar with additional special techniques.

It has no moving parts, and thus no need for lubrication (with the associated possibilities for backstreaming). However, it cannot be run continuously because it is limited in the total volume of gas that it can effectively pump. Also, it cannot effectively pump hydrogen, helium, or neon; more specifically, it cannot effectively pump any material with a condensation temperature lower than that of liquid nitrogen.

Cryopump

Use very low temperatures to condense volatile gases out of the vacuum volume. Obviously this can only feasibly work with certain gases. Also, this would impose some limits on the materials used in the construction of the vacuum system and the experiment. If the experiment can be done in extremely cold conditions, this is one way to help achieve ultra high vacuum.

Cryosorption

Cool things down so that gases can be adsorbed. Or this can merely slow down the impinging gases, cooling them down, effectively trapping some of them nearby, improving the vacuum.

Oil diffusion pump

Now we look at the oil diffusion pump. The idea is basically to blast a jet of oil vapour that carries the gases present in the volume towards the far end of the pump and the exhaust. It is called a diffusion pump because the gas diffuses back towards the vacuum, but it gets carried away by the jet.

The oil vapour molecules are apparently accelerated to speeds of more than 750 mph. The hot ejection is into the foreline, where we also get degassing of contaminated oil. These pumps can be constructed in a multi-stage fashion so that different purities of oil are used at different levels of vacuum.

Pumping speed follows a relatively predictable shape that is quite interesting: first it increases to a steady speed, stays there until a ‘critical point’, then drops off fairly sharply to about half that pumping speed, and finally decays more gradually (roughly exponentially).

To deal with foreline pressures getting too high, you can introduce some chilled surfaces using some liquid nitrogen. The chilled surfaces will reduce the pressure.

LN2 Trap

Since several types of pumps have backstreaming oil, we would like to stop this oil from getting to the vacuum chamber. Between the pump and the vacuum chamber, we might choose to have a LN2 trap. We use the low temperature of liquid nitrogen to condense out most contaminants, including oil, so that they are not present in the vacuum.

Turbomolecular Pump

The turbomolecular pump relies on a transfer of momentum to gas molecules from spinning rotor blades. Each set of spinning rotors tries to knock the molecule down to the next level of the pump. Between each set of rotors, there is a set of stators that are designed so that the molecules that are hit by the rotors are likely to fly down to the next stage.

All of these pumps are multi-stage, with each stage providing a compression of approximately 10. The rotor blades must be spun very quickly (up to 1500 times per second!), making some sort of bearing necessary. However, since bearing oil presents problems for achieving ultra high vacuum, some of these pumps now use magnetic bearings.
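The per-stage compression multiplies up quickly. A sketch (the stage count and foreline pressure are assumptions for illustration, not numbers from the slides):

```python
stages = 9        # assumed number of rotor/stator stages
per_stage = 10    # approximate compression per stage
total = per_stage ** stages
print(f"overall compression ratio ~ {total:.0e}")

foreline = 1e-2   # assumed foreline pressure (mbar)
print(f"best inlet partial pressure ~ {foreline / total:.0e} mbar")
```

Nine stages of tenfold compression give a ratio of a billion, which is how a modest foreline pressure can support a very low inlet pressure.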

This pump is generally capable of achieving and maintaining high vacuum. It can achieve ultra high vacuum in some circumstances. Typically it is employed in conjunction with a roughing pump since it does not operate well near atmospheric pressure. However, since 2006 models have existed that can exhaust directly to atmospheric pressure.

Larger and heavier molecules are easier to pump than lighter ones in this case. Molecular hydrogen, for instance, is quite problematic for this pump to move. This is one of the reasons why the ultimate vacuum achievable with this pump is limited.

Sputter Ion Pump

The sputter ion pump operates on the principle of ionizing atoms and then using a strong electric potential to move them to a desired surface electrode. The molecules strike the surface and undergo chemisorption, physisorption, or neutralization (stealing an electron and flying away as a neutral). These neutrals are likely to be ionized again and sent back to the surface; eventually they may end up attached to the surface as neutral molecules.

This pump is capable of achieving 10⁻¹¹ mbar under ideal circumstances, and it has no moving parts. Grütter calls attention to the fact that titanium is very reactive, perhaps making it a very good candidate for the electrode material.

Gauges

Depending on the level of vacuum that we are trying to achieve, we need to employ different types of gauges.

Bourdon Gauge

Operates under the principle that a flat tube tends to become more circular when the pressure inside it rises. This effect seems rather small, but there are ways to amplify it. One of these ways is to arrange the tube in a “C” shape where its motion is more noticeable. This motion can then for instance be connected through gearing to a needle that will display the pressure.

These gauges are very linear and can be reasonably sensitive. However, they will not operate in high vacuum or ultra high vacuum.

Pirani Gauge

The Pirani gauge is a thermal-conductivity gauge, closely related to the thermocouple gauge. A metal filament is heated within the evacuated chamber. When there are many molecules striking it, it loses heat to them, so its temperature falls. Conversely, when very few molecules strike it, as in high or ultra high vacuum, very little heat is carried away. This means that the temperature of the filament will be higher when it is in higher vacuum.

Since resistivity varies with temperature, we can characterize the metal filament at various temperatures so that we understand what temperature it is at when we observe a particular resistance.  When we understand what the temperature is, we have an indicator of the pressure.
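That calibration step can be sketched as a lookup table with interpolation in log-pressure. The resistance-pressure pairs below are invented for illustration:

```python
import math

# invented calibration: (filament resistance in ohms, pressure in mbar)
CALIB = [(12.0, 1e2), (14.0, 1e0), (16.0, 1e-1), (18.0, 1e-3)]

def pressure_from_resistance(R):
    """Interpolate log10(pressure) linearly between calibration points."""
    pts = sorted(CALIB)
    if R <= pts[0][0]:
        return pts[0][1]
    if R >= pts[-1][0]:
        return pts[-1][1]
    for (r1, p1), (r2, p2) in zip(pts, pts[1:]):
        if r1 <= R <= r2:
            f = (R - r1) / (r2 - r1)
            return 10 ** (math.log10(p1) + f * (math.log10(p2) - math.log10(p1)))

print(pressure_from_resistance(15.0))  # between 1 and 0.1 mbar
```

Interpolating in the logarithm rather than the raw pressure is the natural choice here, since the gauge response spans several decades.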

Ionization Gauges

Electrons are emitted from a hot filament. These electrons are then attracted by a helical wire (or set of wires) held at a positive potential of ~150 V. As the electrons fly across the intervening space, they tend to run into any molecules flying around out there. An electron is likely to ionize such a molecule, knocking out one of its electrons and leaving it positively charged.

The ionized molecule is pushed away from the spiral electrode and towards a central wire held at a negative potential of about -30 V. The ions strike the central wire, creating a small current, which is then amplified and measured. Combined with a calibration of collector current against pressure, this gives an accurate pressure reading.
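In practice the gauge reading reduces to P = I_collector / (S · I_emission), where S is a gas-dependent sensitivity (roughly 10 per torr for nitrogen). A sketch with typical-looking (assumed) currents:

```python
S = 10.0             # assumed gauge sensitivity for N2 (1/torr)
I_emission = 4e-3    # assumed filament emission current (A)
I_collector = 4e-10  # measured ion collector current (A)

P = I_collector / (S * I_emission)   # pressure from the gauge equation
print(f"P = {P:.1e} torr")
```

The sensitivity S differs from gas to gas, which is why ion gauge readings must be corrected when the residual gas is not nitrogen-like.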

This is the most widely employed device for measuring vacuum pressure between 10⁻³ and 10⁻¹⁰ torr.

Residual Gas Analyzer

These devices are in effect small mass spectrometers: they measure mass-to-charge ratios. They are used at very low pressures; their highest operating pressure is about 10⁻⁴ torr. Their sensitivity is remarkable, as they can measure partial pressures down to 10⁻¹⁴ torr.

How do they work? Gas molecules are ionized by a beam of electrons. The details of the electron beam creation from a hot filament dictate that this process is best conducted at low pressures. The presence of a lot of oxygen for instance can be bad for a filament.

The next step is a Quadrupole Mass Analyzer (QMA) which is capable of selecting a specific range of mass-to-charge ratios for analysis. The QMA will allow these selected ions to pass through while all others will strike the sides.

The ions are then collected by a detector such as an electron multiplier. The final result is a graph of mass-to-charge ratio with intensities. So now we know what the mass-to-charge ratio is for the materials in our vacuum, and we also understand the relative number of ions of each type. Using our knowledge of mass-to-charge ratios, we can figure out what molecule each signature is for.

The difference in the spectra of two different system states can be striking. Grütter compares an unbaked normal vacuum system with a system that has an air leak. In the unbaked system, H₂O is the largest overall signal, with some H₂, CO₂, N₂, and CO. The air-leak system looks very different: N₂ dominates, followed by H₂O and O₂, and finally CO₂ and H₂. These differences in composition can be very helpful in ascertaining the nature of a leak or an unknown gas source in the system.
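Identifying the peaks is essentially a lookup from mass-to-charge ratio to candidate molecules. A minimal sketch (note that real ambiguities exist, such as N₂ versus CO, both at m/z = 28):

```python
# common singly-charged m/z signatures seen in vacuum systems
SPECIES = {2: "H2", 4: "He", 18: "H2O", 28: "N2/CO", 32: "O2", 44: "CO2"}

def identify(peaks):
    """Map measured m/z peaks to candidate molecules."""
    return {mz: SPECIES.get(mz, "unknown") for mz in peaks}

# an air-leak-like spectrum: N2 dominant, then H2O and O2
print(identify([28, 18, 32, 44, 2]))
```

Seeing N₂ and O₂ together in roughly the 4:1 atmospheric ratio is the classic fingerprint of an air leak, while a water-dominated spectrum points at an unbaked chamber.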

Scratching the surface of surface science

The primary resource for this material was a lecture by Peter Grütter.

Why has it been researched?

The semiconductor industry is probably the primary reason why surface science has received so much attention. It is well established in industry and in science. The development of the tools and techniques has been driven primarily by the semiconductor industry.

Another major driver is catalysis. This is the area of study of how to catalyse reactions/processes so that they can happen more quickly and/or with lower energy requirements. Surfaces can be extremely important for catalysis. With regards to catalysis, the sites of interest on the surface are actually the kinks and defects rather than the flat surface itself.

Small features can be of primary importance in many of these condensed matter fields of study. Dr. Grütter calls attention to the fact that in the semiconductor industry, the doping atoms scattered among the silicon are crucial to the operation of the devices.

Introduction to Surface Science

In this class, we will be talking primarily about solid-vacuum interfaces rather than solid-liquid interfaces. We are building on the knowledge we gained in the introductory sections on vacuum systems.

Surfaces are three-dimensional. They are not merely two-dimensional planes; they are a layer of transition from bulk conditions to vacuum conditions.

The dipole layer is an interesting physical phenomenon that takes place at the surface of a material. Electron density does not drop to zero at the outermost surface atoms; it tapers off, becoming negligible a small distance away from the surface. This distance is on the order of one Fermi wavelength, which varies depending on the material.

So some negative charge ends up outside the surface. The only picture I can find of this effect online is here, even though it is given in terms of electrostatic potential rather than electron density. Rather than a smooth drop in electron density, we end up with a periodic (on the scale of fermi wavelengths) charge density as we look into the surface. Thus, just inside the surface we actually have a higher electron density than we do further into the bulk of the material. This interface between the high internal electron density and the low external electron density is called the dipole layer.

The dipole layer can stop atoms from diffusing out of the surface. As they diffuse towards the surface, they suddenly come up against a larger density of electrons, which push them away. In the image linked above, the diffusion would be taking place from right to left. The lower potential pushes back on the atom’s electrons, causing it to have more difficulty getting through the surface than it had moving throughout the bulk of the material.

A few Observations

As we already know, taking an electron out of the surface takes some energy. The amount of energy depends on several things, such as the strength of the bond to the ion core and the interaction of the electrons with each other. This is known as the work function. There are two versions: one considers the energy needed to move the electron to just outside the solid surface, while the other considers moving the electron to infinity.

The work function depends primarily on the dipole layer. There can be different work functions for different surfaces (faces) of a crystal, depending on the orientation of the atoms! The work function also depends on step density. What is a step? Consider a perfect planar surface of atoms, then add another layer covering half the surface, so that there is a ‘step’ up to the second layer. There can be many such steps. As a heavy and long-time computer user, one of the first things I visualized was an angled line on a computer monitor: it is not smooth, but made of straight sections with ‘steps’ between them. Similarly with a surface viewed at the nano scale. The closer the steps are to each other, the higher the step density. Step density changes the work function because of the details of the dipole layer at each step.

A question you must learn to ask yourself: is what you are studying affected by small defects in the system? In the history of science this has been overlooked many times. How big of an effect can these things have? Well, it turns out that a 5% difference in work function for tungsten can be created by step density alone. Even more astounding, a 1 eV difference in work function can be measured depending on the tungsten orientation! 1 eV at the nanometer scale indicates a huge difference in electric field. These hugely different electric fields help explain why such small defects can often have a large effect on chemical reaction rates via catalysis.

Surface Energy

The simplest way to explain surface energy that I can find is from Wikipedia, where it is stated that surface energy can be defined as the excess energy at the surface of a material compared to the bulk of the material.

In class, the first thing we discuss about surface energy is the jelly model (jellium), which is quite similar to the plum pudding model. It feels almost heretical to be talking about this, since this class is in a building named after the man who proved that the plum pudding model was wrong (Ernest Rutherford).

We can calculate surface energy for jellium quite easily. This tends to agree with experiment at low electron densities, but breaks down badly at higher densities. The more complicated (and accurate) models are quite difficult to calculate. Additionally, the surface energy is very hard to measure experimentally.

Surface energy is crucial to our understanding of many physical aspects of surfaces. For example, it helps us understand how we can grow materials on other materials. Will we get island growth or layer-by-layer growth?

One of the reasons this is difficult to model correctly is that the electron correlation effects between d-orbitals are difficult to calculate. This is why estimating the surface energy of elements such as gold, iron, etc. involves very complicated calculations.

It turns out that finding the minima of surface energy will show us the shape of an equilibrium crystal. Real crystals may not completely agree because our physical crystal growth is not perfect. In closing, surface energy is important for studying crystal shapes as well as understanding what materials we can grow on what substrates and how they grow.

Surface Structure

There are three major ways in which the surface structure can be very different from the bulk structure.


The spacing between the surface layer and the second layer is often not equal to the distance between the second and third layers. Surface atoms tend to get pulled in a little bit because they do not have a bond on one side. This is true for both covalent solids and metals. This relaxation may be up to three layers deep (the distances grow back towards the standard lattice spacing as we go deeper).


The surface structure itself can differ from the bulk structure. For example, there might be more atoms on the surface layer than in a bulk layer, or they may be connected to each other at different angles. Thus, the unit cell of the surface crystal can be very different from the bulk unit cell. We actually cannot calculate some of these structures because they are too complicated.

Related aside: Dr. Grütter began talking about silicon (111). He said, “This was the guinea pig, or Drosophila, of surface science for a number of years.” Apparently about 20 years of work went into understanding silicon (111), which has what is called a “7x7 reconstruction” comprising 64 atoms in 4 layers. The problem was eventually solved by a combination of scanning tunneling microscopy and diffraction studies.

Aside from the aside: This is not the industrially relevant silicon unit cell. That role is filled by silicon (001). Dr. Grütter says that it is very important that one can grow very smooth layers of oxide on silicon (001).

Aside 3: Dr. Grütter says that silicon cannot be used as a photon-emitting material very well because its band gap is indirect, so emitting a photon would violate momentum conservation. Gallium arsenide, however, is capable of being a useful photon-emitting material.


Most materials are alloys; there are multiple constituent elements. Will the surface layer have the same composition as a bulk layer? It turns out that surface layers are often completely different from the bulk in terms of composition. The surface might be all of one element, the second layer might be a split of some kind, and the third layer might be a different split.

This fact has huge implications for surface characteristics such as the ability to catalyze reactions, corrosion resistance, hardness, etc.

Surface Complexity

We tend to think of surfaces as atomically flat, but they are not. A decent flat surface might have truly flat areas that are 10 nm in length. We might be able to get 100 nm of nice flat area if we try really hard and employ a lot of tricks.

Some of the forms of imperfections in a surface are:

  1. kinks
  2. terraces
  3. vacancies
  4. adatoms
  5. monoatomic steps
  6. step-adatoms

Curious about what these are? Check out this Wikipedia page which includes some of their definitions.

A fair amount of research has been done on the effects of these imperfections in surfaces. For example, we have learned that electromigration is affected. Defects can backscatter electrons. This becomes important when the surface atoms make up a notable fraction of the total atoms in the wire, which happens at the nano scale.

LaTeX on Ubuntu 10.10

When working in LaTeX on Ubuntu in the past, I was reasonably impressed with the development environment called "kile".

To install it, I fired up the System->Synaptic Package Manager and searched for "kile". I marked it for installation and applied the changes.

After several minutes, it had downloaded and installed itself. I then right-clicked on my menu bar in a blank area. A small menu popped up and I clicked "Add to Panel". A dialog popped up and I clicked "Custom Application Launcher". Another dialog showed up. In both the name and command fields I typed "kile". As soon as I finished typing in the command textbox, the icon visible on the left side of the dialog changed to the kile icon. I hit OK.

Now I have a Kile button on my main menu bar. I fired it up.

At the top of the screen it had a button that said "ViewHTML". I want to be working with PDFs, not HTML, so I clicked the little down arrow to the right of the button. A menu popped up and I selected "ViewPDF".

This is when I found out that I did not have "okular" installed. It is a viewer that kile integrates well with. I went into Synaptic Package Manager again, searched for "okular", and downloaded and installed it.

Now when I load up a .tex file, such as my blank tex that I created, I can click "PDFLatex" to build the tex into a pdf. Then I can click ViewPDF to view it.

Working with LaTeX on Windows 7

I just had some trouble setting up LaTeX on Windows 7. I think I had some of these issues in the past however when I performed this installation years ago. The following steps are in some ways more of a guide for myself in the future than anything. Obviously I am putting it online in the hopes that someone else may find it useful as well.



Download and install MiKTeX. There are some options on how to do this. What I did was select the "MiKTeX 2.9 Net Installer". I had to download a file called "setup-2.9.3959.exe", which I then had to run twice. Why did I have to run it twice? Because each run will only do one of two things: 1) download the MiKTeX distribution onto my hard drive (compressed), or 2) install the distribution FROM a place on the hard drive.

So I had to run the setup executable once to download the distribution, and once to install it.

Important Note: When the setup asks you something like: "You can choose whether missing packages are to be installed on-the-fly: " answer "Yes". If you don't answer yes, it can create problems with integration with TeXnicCenter.

Adobe Acrobat Reader

If you don't already have it, you can get it at the reader homepage.

Ghostscript and GSview

If you want to be able to use .ps files. Download Ghostscript, and GSview. Install Ghostscript by running the executable, then do the same with GSview.


TeXnicCenter

You can download this piece of software at the TeXnicCenter homepage. I didn't have any trouble with the installation until...

When you launch TeXnicCenter for the first time, it will ask you for the location of your LaTeX executables. In the case of MiKTeX, I found them in C:\Program Files\MiKTex 2.9\miktex\bin

You may also need to know the location of your .ps file viewer. Since I installed Ghostscript and GSview as described above, my .ps viewer was in C:\Program Files\Ghostgum\gsview\gsview32.exe

There, now you should be able to load up a .tex file in TeXnicCenter and build it and view it.

General Trends in the History of Circuits

The material for the post is based primarily on a lecture by Thomas Szkopek in the class Nanoelectronic Devices that I am taking at McGill.

Electrons are very light and have a definite (constant) charge. This high charge-to-mass ratio is a primary reason why electrons are better suited than nucleons or mechanical devices for the creation of semiconductor electronics.

What is a transistor? The name comes from ‘transferred resistance’. We can control the resistance of a lump of material, and by controlling the resistance we can control the flow of current through a semiconductor. This is the primary basis for decision making at the circuit level.

Have we built more transistors than anything else? (Or is it computer bits, like those in a hard drive? I'm not sure what he said.)

In semiconductors, germanium was eventually replaced by silicon. Why?

Not because of cost or availability (initially). It was primarily a question of easier fabrication. The key facet was the quality of the oxide you can grow on the silicon rather than on the germanium.

There has been a lot of talk for years about how this or that material was going to replace silicon. None of them have yet done so because silicon is really well established and quite good at what it does. “Gallium Arsenide is the material of the future and it always will be.” - Szkopek.

Why smaller and smaller integrated circuits? By making the parts smaller and closer together, we can eliminate a lot of the resistances, capacitances, and inductances, as well as reducing our overall material usage. This should mean cheaper integrated circuits that require less material to create and less power to run.

Gordon Moore

Gordon Moore was a chemist by training but was also one of the most successful electronic engineers of all time. What was Moore’s major contribution? He figured out how to grow high quality oxide on silicon.

As a computer scientist, I am well aware of some of the many ways Moore’s Law has been misrepresented. So what is it actually? We read “Cramming More Components onto Integrated Circuits”, the famous paper that Moore wrote in 1965, from which ‘Moore’s Law’ was extrapolated.

Moore's Law comes from a plot of relative manufacturing cost per component versus the number of components per integrated circuit (the diagram is in the paper, or on the Wikipedia article).

Cost increase as we move to the right is primarily because fewer chips are successfully made when we try to jam more components onto the wafer. There is a minimum cost for each technology level.

Atomic Scale

What happens when transistor dimensions approach 10nm? We are looking at atomic scale.

He then showed us some pictures, taken with scanning tunneling microscopy, of a tiny surface with iron atoms on it. The atoms were physically arranged into a circle. As this symmetry is created, you can see a symmetric pattern of standing waves of electron density form in the center. This is an incredible (and graphic) demonstration of quantum mechanics in action. The pictures are from this paper on “Confinement of electrons to quantum corrals on a metal surface.”

One of Szkopek’s main points with regards to these photos is that atomic scale disorder is going to be present when we are working at such ridiculously small scales. Some of this disorder can be dealt with through more careful manufacturing and usage techniques, but we are definitely getting into the realm where we are starting to touch upon the omnipresent low-level disorder of the universe.

Szkopek says that Intel is currently using a 1.2 nm oxide layer. That is about 4 atomic layers of oxide. We are at the atomic scale, and will have to consider the physics that governs it.

In closing, Szkopek talked a bit about how the nice formulas we tend to see in the theoretical sections of courses devolve into complicated, ugly looking things when we try to do real problems. There is a tendency to term this “things getting crazy”. Szkopek makes his point clear when he closed the class with: “Things don't get crazy, they get physical!”

Basics of Vacuum Systems

The primary source of knowledge for this post is the first lecture of Experimental Condensed Matter which was delivered by Peter Grütter.

The primary things he wants to teach us in this section are:

  1. Basic factors determining pump down times and ultimate pressure achievable for physical vacuum systems.
  2. Material choices in vacuum applications. These can be important for many reasons. An example would be the necessity of high-temperature tolerance in those situations where we need to achieve very high vacuum since that is only possible if we ‘bake the system’. More on this later.
  3. Some basic hands-on demonstrations of these concepts.

He believes that memorizing big formulas is not a high priority. His position is that the information is available as long as you know your basic concepts well enough to know that you should go look it up. When designing physical systems, we need to know the right questions to ask.

Vacuum systems are utilized for a wide variety of scientific and industrial applications. Some things brought up were signal to noise ratios for sensitive instruments, crystal formation, and insulating devices. This list is severely incomplete even as an outline of the applications that he spoke about in class.

Mean Free Path

The mean free path of a particle is the average distance travelled by a particle between successive collisions. These collisions can be with other gaseous particles or with the walls of the containing vessel (if there is one).

If we need a molecule to land on a specific surface with a well-defined energy, we need to maximize its mean free path so that the other gas particles are not colliding with it and thus changing its energy. When creating layers of materials on top of a substrate, it is often necessary to hit the surface with molecules of a specific energy.
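As a rough illustration, the kinetic-theory mean free path λ = k_B·T / (√2·π·d²·p) can be evaluated directly. This is a sketch, not lecture material; the N2 kinetic diameter of ~0.37 nm is an assumed textbook value:

```python
import math

def mean_free_path(p_pa, d_m, T=300.0):
    """Kinetic-theory mean free path: lambda = kB*T / (sqrt(2)*pi*d^2*p)."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return kB * T / (math.sqrt(2) * math.pi * d_m**2 * p_pa)

d_n2 = 0.37e-9  # assumed kinetic diameter of N2, metres

# Atmospheric pressure: tens of nanometres
print(f"1 atm: {mean_free_path(101325, d_n2) * 1e9:.0f} nm")

# Ultra high vacuum (1e-8 mbar = 1e-6 Pa): kilometres
print(f"UHV:   {mean_free_path(1e-6, d_n2) / 1000:.1f} km")
```

The span from nanometres at atmosphere to kilometres under ultra high vacuum is exactly why vacuum matters for keeping a molecule's energy well defined on its way to the surface.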

Well-defined surfaces

On every surface, there is always contamination from the atmosphere. Much of this contamination is in the form of water. Water is polar, so it will stick to almost anything. Tens of nanometers of water can build up! Even on hydrophobic surfaces there is a layer of water. We use high vacuum to help make a clean surface.

Contamination, even with water, can be bad for many reasons. For example, the surface tension of water (which is quite strong) can damage instruments such as a scanning tunneling microscope. You may also end up measuring the first layer of water and other deposited materials instead of the primary material of your surface. The layer of water can also cause warping in thin films of material. You usually have to bake a vacuum system to get the water layer off of the surfaces.

To keep a surface clean for longer than a few seconds, you need a very high vacuum. Even millibars of atmosphere will deposit all kinds of junk.

The atomic version of sandpapering is shooting ions at the surface (sputtering). This can damage the underlying surface as well, so you then need to heat it again and anneal it to restore the structure.

Dr. Grütter talked about the following process flow in surface science:

  1. Clean surface
  2. Anneal it
  3. Characterize it
  4. Do experiment
  5. Write paper
  6. Graduate (Or get a raise.)

Even one part in a million for a surface layer can be a lot of impurity. Sometimes we do the first two steps: “clean surface” and “anneal it” in a cycle for weeks or months before we will have a clean crystal or surface.

Question: How long does it take to cover a clean surface with 1ML?

First of all, the ML is a monolayer (one layer of material). Our class answered this question with estimates between 10s and 1 microsecond. These were just guesstimates. A closely related question is: How many atoms z strike a surface area per unit time at a given pressure?

z = (1/4) n u

where u is the mean particle velocity and n is the particle density. Writing this in a more useful way gives us:

z = p / sqrt(2π m k_B T), with m = M / N_A

where p is the pressure, N_A is Avogadro’s number, M is the molar mass, k_B is Boltzmann’s constant, and T is the temperature.

Our back of the envelope calculation in class was a demonstration of a rule of thumb in vacuum system design:

It takes about 1 second to adsorb one monolayer at 10^-6 mbar of pressure with a thermalized gas.

Aside: Adsorb is not a typo, this is referring to the process by which another atom or molecule becomes attached to a surface. Wikipedia defines it in terms of adhesion to a surface.

Under these conditions, we would need to do our experiment very quickly if it was sensitive to a monolayer of atoms on the surface. Having to do experiments quickly is problematic because the signal to noise ratio goes down.
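The rule of thumb can be checked with a quick script using the impingement rate z = p / sqrt(2π m k_B T). This is only a sketch: the surface site density of ~10^19 sites/m² and a sticking coefficient of 1 are my assumptions, not numbers from the lecture:

```python
import math

def impingement_rate(p_pa, molar_mass_kg_mol, T=300.0):
    """Particles striking unit surface area per unit time:
    z = p / sqrt(2*pi*m*kB*T), with m = M / NA."""
    kB = 1.380649e-23   # Boltzmann constant, J/K
    NA = 6.02214076e23  # Avogadro's number, 1/mol
    m = molar_mass_kg_mol / NA
    return p_pa / math.sqrt(2 * math.pi * m * kB * T)

# N2 at 1e-6 mbar = 1e-4 Pa
z = impingement_rate(1e-4, 28e-3)

sites = 1e19      # assumed surface site density, sites/m^2
t_ml = sites / z  # monolayer time, assuming every hit sticks
print(f"flux ≈ {z:.2e} /m^2/s, monolayer time ≈ {t_ml:.1f} s")
```

The result comes out at a few seconds, i.e. order 1 second, which is the rule of thumb.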

Surface properties can become very important at nano scales. For example, nanowires might have as many surface atoms as bulk atoms!

Second layer and deeper layers can be very different than the first layer (and often are). This depends on a number of factors including the sticking characteristics of the layers.

Basic components of a vacuum system

Here we look at a very simple layout for a complete vacuum system.

  1. Pressure sensor
  2. Vacuum system - the containment vessel for the vacuum.
  3. Pump

We will look into some of the following topics in more detail later on.

The connectors between the parts of the vacuum system are very important. The width of the connecting tubes has a great effect on the effectiveness of the pump in terms of maximum vacuum attainable and pumpdown time (the length of time it takes to create a vacuum of a given level).

Which of these components leak, and how much?

Long pipes can also be problematic. The conductance of a pipe falls off linearly with its length, reducing the effectiveness of the pump for creating vacuum.

Location of the vacuum gauge matters. If you buy a system, where were they measuring its specifications? Manufacturers will often measure its statistics right at the mouth of the pump, while what you care about primarily is the effective vacuum that the pump can create in your vacuum system. For a real measurement, you would place the gauge in the evacuated space (the experimental space) rather than in the connecting tube.

Pressure Units

Definition: 1 standard atmosphere = 760 Torr or 1013 millibar (mbar), at sea level, 0°C, and 45° latitude.

Many people don't distinguish between Torr and mbar, despite the ~30% difference between them. Why? The performance of these systems depends very strongly on the specific gases inside, and the calibration curves generally do not take the composition of the gas into account. The error is usually about 25% for the calibration of vacuum gauges. This is why it often does not matter very much whether you read the units of pressure as Torr or mbar.

Where does Torr come from? 760 mm of mercury is one atmosphere.

Partial pressure

Each component gas in the atmosphere (or any contained gas) has its own pressure. The sum of all the partial pressures gives you the total pressure. In the atmosphere, some amount of the total pressure is due to each of: Nitrogen, Oxygen, Argon, CO2, etc.

Pressure sensors can sometimes detect certain types of gas more than others. Dr. Grütter mentions that Xenon and Oxygen might be detected differently with the same apparatus. Instruments may not be able to measure pressures accurately in all cases. This must be taken into account.

Vapour Pressure

We look at the vapour pressure of water at various temperatures. Even ice at 0°C has some non-zero equilibrium vapour pressure, meaning it off-gasses water molecules. So even ice in your vacuum system will emit vapour, and this will stop you from achieving ultra high vacuum conditions.

General pressure ranges

Rough (Low) Vacuum: atmospheric pressure down to 10^-3 mbar
High Vacuum: 10^-3 to 10^-8 mbar
Ultra High Vacuum: less than 10^-8 mbar

The difference between high and ultra high vacuum is essentially that ultra-high-vacuum systems need to be baked to get out the residual water molecules. Thus, the material choices are much more limited if you need to reach ultra high vacuum, since the materials will need to withstand temperatures of 100-200°C.

If you want to do surface science, high vacuum is generally not good enough. You need ultra high vacuum.

How do we create a vacuum

Here we discuss gas flow conductance where we draw an analogy between gaseous flow and electric current flow.

Viscous and Molecular Flow

Viscous (or turbulent) Flow is characterized by momentum transfer between molecules. What is most important is how the molecules interact with each other.

Molecular Flow is the state where molecules flow essentially independently of one another. Typical collisions are with the walls rather than with other molecules. Here we can treat molecules independently.

Consider gas conductance as analogous to electrical conductance: we can add up resistances in parallel, series, etc. in a similar fashion.
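The electrical analogy can be made concrete in two tiny helper functions (a sketch: conductances in series combine the way resistors in parallel do, and conductances in parallel simply add):

```python
def series(*conductances):
    """Conductances in series combine like resistors in parallel:
    1/C_total = sum(1/C_i)."""
    return 1.0 / sum(1.0 / c for c in conductances)

def parallel(*conductances):
    """Conductances in parallel simply add."""
    return sum(conductances)

# Two identical 10 L/s tube sections joined end to end halve the conductance
print(series(10.0, 10.0))    # 5.0
print(parallel(10.0, 10.0))  # 20.0
```

This is why every extra elbow, valve, and length of pipe between the pump and the chamber eats into the effective pumping speed.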

Mean free path and molecular density vary with pressure. In air under standard conditions, the mean free path is less than 100 nm. Under ultra high vacuum we can reach mean free paths of 50+ km.

Interplanetary space is high vacuum. Interstellar space is very high vacuum.

Mean free path over the characteristic dimension of the containment vessel is a useful quantity for defining various flow regimes. If the mean free path is the same as or longer than the characteristic dimension, we have molecular flow. If it is shorter, then we are in the realm of viscous/turbulent/laminar flow.


Conductance for Viscous flow in a cylindrical pipe: The conductance is inversely related to pipe length and proportional to the 4th power of diameter!!! This means that under viscous flow, doubling the pipe diameter will increase the conductance 16 times.

Conductance in molecular flow (long round tube):

C = 3.81 · sqrt(T/M) · D³ / l

where T is temperature in kelvin, M is the molecular mass in a.m.u., D is diameter in cm, l is length in cm, and C is in litres per second. If you want to see the derivation of this formula, there is a paper online that has it.
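As a numeric sketch of how punishing the D³/l dependence is, here is the standard long-round-tube molecular-flow formula evaluated for a typical connecting line (the 3.81 coefficient gives C in L/s with D and l in cm; treat the example numbers as illustrative):

```python
import math

def molecular_conductance(D_cm, l_cm, T=293.0, M=29.0):
    """Long-tube molecular-flow conductance in litres/second:
    C = 3.81 * sqrt(T/M) * D^3 / l
    (D, l in cm; T in kelvin; M in a.m.u.).
    For air at room temperature this reduces to C ≈ 12.1 * D^3 / l."""
    return 3.81 * math.sqrt(T / M) * D_cm**3 / l_cm

# A 1 m long, 2.5 cm diameter connecting line only conducts ~2 L/s
print(f"{molecular_conductance(2.5, 100):.2f} L/s")

# Doubling the diameter buys a factor of 8 in the molecular regime
print(f"{molecular_conductance(5.0, 100):.2f} L/s")
```

Note the cubic diameter dependence here, versus the 4th power for viscous flow above: in both regimes, fat and short beats long and skinny.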

Pumps have a given pressure p and pump speed S. The throughput of the pump is Q = pV/t = pS, where t is time. The effective pump speed seen by the chamber is not equal to the rated pump speed S: the conductance C of the connection acts in series with the pump, so 1/S_eff = 1/S + 1/C.

This means that S_eff ≈ C for C << S, and this condition is common. The connection, not the pump, is usually the limiting piece. This effect is noticeable in terms of both pumping speed and maximum vacuum attainable.
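The series combination 1/S_eff = 1/S + 1/C makes the "connection is the limiting piece" point vivid (a sketch; the pump and conductance values here are made up for illustration):

```python
def effective_pump_speed(S, C):
    """Pump speed S and line conductance C act in series:
    1/S_eff = 1/S + 1/C  =>  S_eff = S*C / (S + C)."""
    return S * C / (S + C)

# A big 500 L/s pump behind a skinny 2 L/s line is effectively a ~2 L/s pump
print(f"{effective_pump_speed(500.0, 2.0):.2f} L/s")

# With a generous connection (C >> S) you keep most of the rated speed
print(f"{effective_pump_speed(500.0, 5000.0):.0f} L/s")
```

Buying a bigger pump does almost nothing if the tubing is the bottleneck.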

Pumpdown time

The pumpdown time for a given volume V, with an effective pump speed S_eff, a target pressure p, and a starting pressure p_0, is:

t = (V / S_eff) · ln(p_0 / p) = 2.3 · (V / S_eff) · log10(p_0 / p)

The 2.3 came from converting ln to log10.
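A minimal sketch of the pumpdown-time estimate t = (V/S_eff)·ln(p_0/p). Note that this only describes the initial, volume-dominated part of the pumpdown; it ignores gas load and outgassing, and the numbers below are made up:

```python
import math

def pumpdown_time(V_litres, S_eff, p_start, p_end):
    """t = (V/S_eff) * ln(p0/p) = 2.3 * (V/S_eff) * log10(p0/p).
    Valid only while the initial gas volume dominates the gas load."""
    return (V_litres / S_eff) * math.log(p_start / p_end)

# 100 L chamber, 10 L/s effective pump speed, 1000 mbar down to 1e-3 mbar
t = pumpdown_time(100.0, 10.0, 1000.0, 1e-3)
print(f"≈ {t:.0f} s")  # six decades of pressure in a couple of minutes
```

Past this regime, outgassing and permeation take over and the curve flattens out, which is exactly what the pumpdown-curve analysis below exploits.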

Gas Load

Gas load is the sources of gas in the vacuum system. It is usually written as Q, and is expressed in mbar litres per second. The gas load for a typical system is due to:

  1. Out-gassing of surface atoms.
  2. Permeation from outside. Nothing is perfectly impermeable.
  3. Leaks (both real and virtual). What is a virtual leak? It is a trapped volume of gas due to bad design of the system. For example, some gas can be trapped in a screw hole below the bottom of a screw. Screws are usually put in at 1 atm, and even if they are put in very tightly, the trapped gas can slowly leak out. This can be fixed by drilling the hole right through; a bigger opening means the gas can move out more effectively, leading to higher vacuum because there is less flow ‘resistance’. Alternatively, you can drill an extra angled hole as a shunt.
  4. Diffusion
  5. Backstreaming from the pump side.  Pumps aren’t perfect. They allow some gas to escape back into the system.

Troubleshooting a Vacuum System

You had a working vacuum system, you shut it down, change some small things, and fire it up again. Now your vacuum is nowhere near as good as it was. What happened? What can you do?

Even a single fingerprint can have huge consequences for vacuum systems. Consider the number of fatty (oil) atoms from a fingerprint that is a few microns deep. These atoms can be a big problem if you are trying to achieve ultra high vacuum.

One major troubleshooting tool is the pumpdown curve. The pumpdown curve is a graph of pressure vs time. To see an example of one, see here.

The key facet of this analysis is that different types of gas load have different signatures on the pumpdown curve. The initial volume of gas in the vessel is typically gone quickly. Then we see the effects of surface desorption, diffusion, and permeation, in that order. Permeation never goes away because it is constant.


O-ring seal

The O-ring seal, or ‘quick flange’, is a typical easy-to-use seal. It involves two flange pieces, an O-ring, and a clamp. The clamp closes on the flange, pressing on the angled metal sides, causing the sealing action.

Not much force is actually needed to seal these, despite what people tend to think.

Scratches are the most major problem. A single scratch that crosses the O-ring area will leak atmosphere. Tightening the O-ring will not stop the leak from a scratch. The only way to fix it is to either polish the surface again or get another piece.

Vacuum grease is used a lot with quick flanges. Dr. Grütter isn’t a big fan of vacuum grease because it tends to acquire dust particles when it is exposed to normal air. Dust particles are often silicon dioxide, an incredibly hard substance, apparently harder than the metals that form the O-ring seal, since use and re-use of vacuum grease can lead to scratches from the dust particles.

CF (Conflat) Seal

The CF or Conflat seal uses a copper ring that is squeezed between two metal pieces that have small knife edges on them. The knife edges cut into the copper, creating an incredibly good seal. These seals can be used for achieving ultra high vacuum.

Overtightening can be very bad, since the knife edges can sometimes hit each other after they have cut through the copper.

The copper rings are generally not reusable, but they cost only about $1 each so they are cheap to replace.

The bolts must be tightened in a pattern similar to that for tightening bolts on a car.

How do you mount it sideways? Find a way to hold the ring in place while you put on the other side of the seal. Dr. Grütter says you can use a bit of scotch tape to hold it there while you put the seal together; once the seal is together, take the tape off. If you forget and then bake the system, you will have melted or burnt scotch tape to deal with. Another technique is to use a gold loop to hold the ring up while you seal it.