The 2 square-inch detector at the heart of HPF

A little history

Among the variety of components that make up HPF, the mammoth 10 m Hobby-Eberly Telescope and the hefty 200 lb diffraction grating are certainly the most imposing. Although these are among the best and most advanced tools in modern astronomy, they would be fundamentally recognizable to astronomers from before the modern era: Galileo Galilei was an enthusiastic promoter and salesman of telescopes in the early 17th century, and Joseph von Fraunhofer first used a prism to refract and study the spectrum of sunlight in the early 19th century.

An early Dutch telescope – “Emblemata of zinne-werck” (Middelburg, 1624)

These early astronomers would certainly be impressed by the tiny 2 square-inch piece of semiconductor sitting at the heart of HPF. This device is what will record the infrared (IR) light from the M dwarf stars that HPF will study, converting the photons from the star into electrons that can be processed by electronics and computers. In a sense, this detector is the fundamental piece of technology that makes HPF possible.

Up until the late 19th century, the only detector astronomers had at their disposal was the human eye. Fortunately, the eye is an excellent light detector: our photosensitive rod and cone cells (along with the constant adjustment of the iris) allow us to see a wide spectrum of light with an impressive dynamic range. So these early astronomers were able to explore the solar system and solar neighborhood, and even to study the spectra of bright objects.

Fraunhofer demonstrates his spectroscope – Richard Wimmer – “Essays in astronomy” – D. Appleton & company, 1900

Two significant issues with “eyeball” observations are: 1) The eye doesn’t have a “long-exposure” mode, familiar from digital cameras (HPF team member Stefansson is an expert), in which images and details can be made out in very dark scenes, simply by exposing for a long time. 2) The eye cannot see large portions of the electromagnetic spectrum, including regions like the infrared, where M dwarf stars emit most of their light.

Moving beyond the eye

The need for sensitivity to low light levels was addressed by the development of photographic emulsion plates in the late 19th century. Astronomers could place these plates at the prime focus of a telescope and expose them for several minutes or longer, revealing details invisible to the naked eye. Despite the inefficiency of these plates (only ~1% of the incoming light was actually recorded), they enabled numerous foundational discoveries, including the structure of stellar populations, the calibration of various techniques for measuring distance (a perennial issue in astronomy), and the expansion and scale of the universe. By the middle of the 20th century, though, the use of these plates had started to reach its limits.

Hubble at the prime focus of the Palomar telescope, where the photographic plates would be placed. He used these plates to discover the existence of other galaxies (island universes) besides our own, and the expansion of the Universe. – AIP: Ideas of Cosmology

The 1960s saw the key development that ushered in the modern era of astronomy: the invention of devices that could directly turn light into electrical charge. Charge-coupled devices (CCDs), as they are known, not only put astronomical images and spectra directly into a form that computers can understand, but are also vastly more efficient than photographic emulsion (up to 80-90% of the incident light can be recorded). Instead of going to an observatory and obtaining photographic plates, astronomers could put their data onto tapes or discs, or (much later on) simply transfer them over the internet.

What exactly is a CCD?

CCDs are generally made of silicon (Si), a semiconductor: a material that is somewhere between an electrical insulator and a conductor. Its structure contains electrons that can be excited into the “conduction band,” which allows them to move around (i.e., carry current). CCDs are structured so that light (photons) excites these electrons, allowing a count of how much light (how many photons) has been received. By printing millions of these tiny photosensitive elements (pixels) in a grid, and putting that grid behind the lenses of a camera, one can record images with remarkable fidelity, entirely electronically.

In a CCD, the semiconductor material is doped with certain materials and layered to create an array of tiny pixels, in which the electrical fields can be precisely manipulated. The result is that a given pixel can spend some time exposing (turning photons into electrons), and then can send its accumulated charge across the array (like a bucket brigade) to be read into a computer by an analog-to-digital converter. The analog-to-digital converter measures the (continuous) voltage from the photo-excited electrons, and translates it into a discrete number that a computer can work with.
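As a toy illustration of that bucket-brigade readout, here is a short Python sketch; the charge values, gain, and ADC resolution are invented for illustration, not real controller numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-pixel CCD column: accumulated photo-electrons in each pixel.
charge = list(rng.integers(0, 50_000, size=4).astype(float))

GAIN = 2.0   # electrons per ADU (analog-to-digital unit)
BITS = 16    # ADC resolution

digital = []
while charge:
    # Bucket brigade: each shift moves every charge packet one pixel
    # toward the output node; the packet that arrives there is digitized.
    packet = charge.pop(0)
    # The ADC maps the continuous charge to a discrete number,
    # clipped to the converter's range.
    adu = min(int(round(packet / GAIN)), 2**BITS - 1)
    digital.append(adu)
```

The key point the sketch captures is that the charge itself moves across the array, and only the output node ever digitizes anything, which is exactly what makes the CCD architecture different from the per-pixel readout described later.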

Energy levels for a metal, different types of semiconductors, and an insulator. The shading corresponds to the density of electrons at each level, and the horizontal axis roughly indicates the density of states in a given material. The energy gap between the valence (lower) and conduction (higher) bands of a semiconductor is small.

The rise of Si-based CCDs was enabled by the growth of the hundred-billion-dollar semiconductor industry, which provides the (incredibly pure) Si-based structures that underlie modern electronic devices. However, Si-based CCDs have a limitation: an electron needs an energy of at least 1.1 eV (known as the “band gap”) to make the jump into the conduction band. This translates roughly to light with a wavelength of 1 micron; if the light has a longer wavelength (less energy) than this, it cannot excite the Si semiconductor electrons and will be invisible to the CCD. We are interested in looking at light from M dwarfs out to 1.3 microns with HPF, so we have to use a fundamentally different type of detector.
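The conversion between band gap and cutoff wavelength follows directly from lambda_max = hc/E; a quick sketch of the arithmetic (physical constants rounded):

```python
# A photon needs energy E >= E_gap to kick an electron across the band
# gap, so the longest detectable wavelength is lambda_max = h*c / E_gap.
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electron-volt

def cutoff_microns(band_gap_ev):
    """Longest wavelength (in microns) that can excite an electron
    across a band gap of `band_gap_ev` electron-volts."""
    return H * C / (band_gap_ev * EV) * 1e6

si_cutoff = cutoff_microns(1.1)      # silicon's 1.1 eV gap -> ~1.1 microns
gap_for_hpf = H * C / (1.3e-6 * EV)  # gap needed to reach 1.3 microns, in eV
```

So a detector aiming at 1.3 micron light needs a band gap below roughly 0.95 eV, which silicon cannot provide.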

How about the infrared?

In the past few decades, new types of IR-sensitive detectors have been developed. These are based on the same principles as Si-based CCDs (use a semiconductor to convert light into charge, then process that charge), but they use different types of semiconductors, which have smaller band gaps and are therefore able to detect IR light. These new detectors also have a different architecture for processing the charge and communicating with the controlling electronics: instead of sending the charge across the array, it is read out directly at each pixel. They are called CMOS (complementary metal oxide semiconductor) detectors, because they use CMOS design to fit the necessary components into each pixel.

The HPF detector

The detector we will use for HPF is made up of 2048 x 2048 pixels, each 18 microns on a side. Each pixel (and the detector as a whole) is made up of two layers: the top semiconductor layer is made from a mixture of Hg, Cd, and Te (call it Mer-Cad-Tel), and the connected bottom layer is Si-based and handles the processing of the charge created in the top layer.

A hybrid CMOS detector much like the one that will be used in HPF. – OSA-OPN.org

The two layers are substantially different, and each is very interesting in its own right. The top layer of HgCdTe has to be just the right mixture of these elements in order to have the appropriate band gap for the ~1 micron light we want to observe. Importantly, we also want the detector to be insensitive to light that is redder (longer wavelength/lower energy) than this, because in this region of the spectrum thermal background radiation is a huge problem. The sky, the telescope, and the instrument itself can emit light at these wavelengths (>1.7 microns), and would easily overpower the starlight we want to observe. Fortunately, our detector has just the right mixture of Hg, Cd, and Te to prevent it from being sensitive to the thermal background coming from the instrument’s surroundings (redder than 1.7 microns).
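The way the Hg/Cd mixture tunes the band gap can be illustrated with the commonly quoted empirical Hansen et al. (1982) fit for Hg(1-x)Cd(x)Te; the operating temperature and compositions below are illustrative assumptions, not HPF design values:

```python
def hgcdte_band_gap(x, temp_k=180.0):
    """Band gap (eV) of Hg(1-x)Cd(x)Te from the empirical Hansen et al.
    (1982) fit. The default temperature is an assumed, illustrative
    operating point, not an HPF design number."""
    return (-0.302 + 1.93 * x - 0.810 * x**2 + 0.832 * x**3
            + 5.35e-4 * temp_k * (1.0 - 2.0 * x))

def cutoff_microns(x, temp_k=180.0):
    """Longest detectable wavelength, via lambda_max = 1.2398 eV*um / E_gap."""
    return 1.2398 / hgcdte_band_gap(x, temp_k)
```

With these coefficients, a cadmium fraction near x = 0.6 gives a cutoff close to the 1.7 micron value described above, and increasing x pushes the cutoff to shorter wavelengths.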

Spectrum of light coming from an M dwarf star, compared to that of the room temperature surroundings of the instrument.

The lower layer (the read-out integrated circuit, or ROIC) is Si-based, and contains all the machinery necessary to control each pixel (resetting it, applying bias voltages) and to amplify (via a source follower), convert, and multiplex its signal. This is quite impressive, since each pixel is only 18 microns on a side! The fact that each pixel has its own controls and readout mechanisms means that the detector is very flexible in how it can be operated. For example, we can address each pixel individually, reading it out or changing its bias voltage. This is very useful for running tests of the detector behavior, and can also be used for fun! (See the figure below, where we changed the biases on a small part of our engineering array to make letters.)
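A toy sketch of what that per-pixel addressability buys you: reading out an arbitrary sub-window instead of shifting the whole array, as a CCD must. The array contents and window coordinates here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the 2048 x 2048 pixel array (random values, illustration only).
detector = rng.random((2048, 2048))

def read_window(array, row0, col0, height, width):
    """With per-pixel addressing, any sub-window can be read out on its
    own -- useful for fast guiding or for testing a small region --
    without touching the rest of the array."""
    return array[row0:row0 + height, col0:col0 + width].copy()

window = read_window(detector, 1000, 1000, 64, 64)
```

In a CCD, by contrast, reaching those 64 x 64 pixels would require clocking charge through every row between them and the output node.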

Graffito on the engineering-grade array.

After each of these layers is independently fabricated, they are pushed together and “bump-bonded” to create the final detector array that we use.

Our detector was designed and manufactured by Teledyne Imaging Sensors, and the technology used was largely developed in the build-up to the James Webb Space Telescope, which will use a number of detectors of a similar type to ours. It is called a HAWAII-2RG (or H2RG: HgCdTe Astronomy Wide Area Infrared Imager, 2k x 2k pixels, with Reference pixels and Guide mode, whew!). These detectors are of excellent quality: they have >70% quantum efficiency and very low noise (astronomers often worry about the “dark current” and “read noise” that can mask the small signals they are looking for). We actually have two detectors of the same kind: an “engineering” grade array with a few cosmetic defects, and a nearly perfect “science” grade array. We use the engineering grade detector to develop the routines and characterizations we need, so that we know how to use the science grade detector and how it will behave.

Despite the excellent characteristics of the detector, there are still some issues that we are working through:

  • Persistence: Some of the pixels on these detectors will “remember” what happened to them, i.e. they still appear to have signal after they have been reset.
  • Inter-pixel capacitance: Pixels can induce (fake) signals on neighboring pixels, since they can be capacitively coupled.
  • Flux-dependent nonlinearities: Pixels can behave differently depending on the intensity of the light they are recording.
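As an example of how such effects are modeled, inter-pixel capacitance is often treated as a convolution of the true image with a small kernel that leaks a fraction of each pixel's signal into its neighbors. The coupling fraction below is illustrative, not a measured HPF value:

```python
import numpy as np

# Illustrative IPC coupling fraction (not a measured value): each pixel
# leaks `alpha` of its signal into each of its four nearest neighbors.
alpha = 0.01
ipc_kernel = np.array([[0.0,   alpha,           0.0],
                       [alpha, 1.0 - 4 * alpha, alpha],
                       [0.0,   alpha,           0.0]])

frame = np.zeros((5, 5))
frame[2, 2] = 1000.0  # all the charge lands in a single pixel

def convolve2d(image, kernel):
    """Minimal same-size 2-D convolution (no SciPy dependency)."""
    out = np.zeros_like(image)
    kh, kw = kernel.shape
    padded = np.pad(image, (kh // 2, kw // 2))
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

observed = convolve2d(frame, ipc_kernel)
```

Note that the total charge is conserved; IPC does not lose signal, it just blurs where it appears to land, which is exactly why it matters for locating a spectral line to a fraction of a pixel.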

Each of these is a tiny effect, but every detail matters for HPF, because the spectral Doppler (radial velocity) shift we are trying to measure corresponds to about 1/1000th of a detector pixel! We are currently working to understand these behaviors better on our engineering grade detector so that we will understand how our science grade array behaves when the time comes.
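To see roughly where a figure like 1/1000th of a pixel comes from, here is the arithmetic, assuming an illustrative resolving power near 50,000 and about 3 pixels per resolution element (assumed numbers, not the instrument's exact specifications):

```python
C_KM_S = 299_792.458   # speed of light, km/s

# Illustrative, assumed numbers (not HPF's exact specifications):
resolving_power = 50_000   # R = lambda / delta_lambda
pixels_per_resel = 3.0     # pixels sampling each resolution element

# Doppler shift: dv/c = d(lambda)/lambda. One resolution element spans
# lambda/R, so the velocity span of a single pixel is c / (R * sampling).
ms_per_pixel = C_KM_S * 1000 / (resolving_power * pixels_per_resel)
shift_ms = ms_per_pixel / 1000   # velocity matching 1/1000th of a pixel
```

With these assumptions one pixel spans about 2 km/s, so a 1/1000-pixel shift corresponds to roughly 2 m/s, the scale of the signal a low-mass planet imprints on its star.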

So…

The HPF detector will be the end of the road for the photons coming from the M dwarfs we will study: after being born in the M dwarf photosphere, traversing light-years of interstellar space, bouncing off the HET mirror, winding down the optical fibers into the basement, and diffracting off the HPF gratings, the camera will focus them down onto the pixels, where their last action will be to kick an electron in the HgCdTe material. If the M dwarf is wobbling due to an orbiting planet, the Doppler shift will cause the successive photons to land in slightly different spots on the detector. Thanks to the impressive array of technology going into HPF, especially the detector, we will be able to measure these shifts and discover new worlds.

Posted in HPF Hardware

The Year in Science: HPF Highlights 2014

It is important to remember that the team building HPF is not made up of technicians dispassionately filling an order for a new machine, but instead includes many of the scientists who will eventually use the spectrograph for their own research.  We believe these collaborators’ enthusiasm for HPF’s science capabilities will be reflected in their contributions to the instrument, resulting in a spectrograph that is both better built and better used.

In addition to documenting the progress of the HPF build, we have taken the time on this blog to detail a lot of the exciting scientific research being produced by the team, in part because we find it especially gratifying to share our results with the public.  With that in mind, we are delighted to announce that two separate research projects led by HPF team members were selected to appear in Discover Magazine‘s top 100 science stories of 2014!


First, appearing at #59 is the exciting story of how Arpita Roy, Jason Wright, and Steinn Sigurdsson solved the 55-year-old mystery of why the far side of the Moon’s surface looks so different from the side we see from Earth.  The far side has a much thicker crust, and fewer of the dark “maria” like those we see in the face of the “man in the moon.”  According to Arpita and her collaborators, the difference can be explained by the increased temperatures on the Earth-facing side due to the heat radiating off the Earth, which was quite hot following the giant impact that formed the Moon.  You can get the full details of their work on Jason’s blog, or in the research paper.

Images of the near (left) and far (right) hemispheres of the Moon, from NASA’s GRAIL mission. Red/white colors indicate higher elevations, while blue/purple colors reflect lower elevation (Image courtesy NASA/GSFC/MIT/LOLA).

Also sliding in as part of #100–which documents the year’s notable exoplanet arrivals and departures–is our work on stellar activity in the Gliese 581 exoplanet system.  This entry includes HPF team members Paul Robertson, Suvrath Mahadevan, Michael Endl, and Arpita Roy, thus marking Arpita’s second appearance in this year’s top 100 list!

As you can see, it has been quite a year in science for the HPF team.  Here’s to pushing back the frontiers even more in 2015!

Posted in HPF Science

HPF Thermal Enclosure Setup at McDonald Observatory

Three members of the HPF team recently visited McDonald Observatory in Texas, with two goals in mind:

  1. Install the HPF thermal enclosure
  2. Set up a temperature monitoring system in the spectrograph room


Installing the HPF Thermal Enclosure

The HPF spectrograph will sit in the Hobby-Eberly Telescope Spectrograph Room, with light fed to it through a set of optical fibers from the telescope. It will sit in a thermal enclosure from Bally – yes, one of those walk-in meat-locker coolers! It is perfect for our purposes, as the insulated box will act as a buffer to smooth out short-term temperature variations (see confirmation of this in the graph below).

Below are a couple of pictures of the Hobby-Eberly Telescope Spectrograph Room before and after installation:

Before

Before installing the HPF thermal enclosure. You can see a) the HRS enclosure (white box on the left), b) the HPF calibration enclosure (silver box in the middle), c) some of the new uninstalled HPF enclosure panels, and d) the open area where the HPF enclosure will sit.

After

After installation: The HPF will sit inside this enclosure inside a clean room (not installed yet!) on the far side seen from this angle. A 6 foot sliding door – big enough for the HPF to get through – will cover up the remaining opening.

Moreover, you can see a short time-lapse video of the setup process right here:

Preliminary temperature monitoring system

During the last day we installed a temperature monitoring system (see figure below) devised by HPF team member Paul Robertson to measure temperatures at 6 places: high and low a) inside the Spectrograph Room, b) inside the Calibration Enclosure, and c) inside the newly installed HPF enclosure. The placement of the sensors is summarized in the illustration below:

Temperature sensor locations: Locations of the 6 PT-100 temperature sensors placed in 3 different locations high and low, summarized above.

The temperature monitoring system: a) An Uninterruptible Power Supply (UPS), graciously supplied by the Observatory, so if there is ever a power outage we can still monitor the temperature! b) A Raspberry Pi computer that logs the temperature from c) the LakeShore temperature monitor, which interfaces with the 6 PT-100 temperature sensors mentioned above.
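A minimal sketch of the kind of logging loop such a Raspberry Pi system might run. The exact query command for a LakeShore monitor depends on the model, so the sensor read is deliberately stubbed out with a stand-in function (the temperatures below are fake values, not our data):

```python
import csv
import io
import time

def log_temperatures(read_temps, writer, n_samples, interval_s=0.0):
    """Append timestamped readings from the sensors to a CSV writer.
    `read_temps` is any callable returning six temperatures; on the real
    system it would query the temperature monitor over its serial or
    Ethernet interface (model-specific, so stubbed out here)."""
    for _ in range(n_samples):
        temps = read_temps()
        writer.writerow([time.time()] + list(temps))
        time.sleep(interval_s)

# Stand-in for the instrument: fixed fake readings on all six sensors.
fake_sensor = lambda: [20.1, 20.3, 19.8, 20.0, 20.2, 19.9]

buffer = io.StringIO()
log_temperatures(fake_sensor, csv.writer(buffer), n_samples=3)
```

Writing plain timestamped CSV keeps the log trivially readable later for exactly the kind of drift analysis described below.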

Nominally, the temperature in the Spectrograph Room is controlled to ±0.3°C, but we wanted to assess this independently, as any temperature changes will cause instrumental drifts, degrading the overall radial-velocity precision. More specifically, our goals in installing the system were to monitor and gain insight into the following:

a) Low frequency temperature variations: These are long-baseline temperature changes, variations longer than a week. The temperature changes between the seasons are a good example. Do these drifts show up at all? If so, at what amplitudes? These questions are interesting as these drifts are the ones that HPF will notice, and where the active temperature control system will come in to compensate.

b) High frequency temperature variations: These are temperature variations around a day or less. Let’s say somebody opens the door to the outside and lets cool air come into the Spectrograph Room in the winter: we observe a temperature dip. In the summer: a temperature spike. Let’s face it: these variations are probably always going to be present. This is exactly why we have to install the thermal enclosure – to buffer these temperature fluctuations out. And, moreover, we want the system to give us a concrete feel for how effective these enclosures (the HPF and calibration enclosures) are at buffering out these high-frequency temperature variations.
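One simple way to separate these two regimes in a temperature log is a boxcar (rolling-mean) filter: the smoothed series tracks the low-frequency drift, and the residual is the high-frequency part the enclosure must buffer. A sketch on synthetic data (the drift period, noise level, and temperatures are invented, not our measurements):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic hourly temperatures over three weeks: a slow drift plus
# fast door-opening-style noise (illustrative data only).
hours = np.arange(24 * 21)
slow_drift = 0.5 * np.sin(2 * np.pi * hours / (24 * 14))  # two-week cycle
fast_noise = 0.2 * rng.standard_normal(hours.size)
temps = 20.0 + slow_drift + fast_noise

def rolling_mean(x, window):
    """Boxcar average that smooths out variations shorter than `window`,
    with edge-padding so the ends are not diluted toward zero."""
    padded = np.pad(x, window // 2, mode="edge")
    kernel = np.ones(window) / window
    smoothed = np.convolve(padded, kernel, mode="same")
    return smoothed[window // 2 : window // 2 + x.size]

low_freq = rolling_mean(temps, window=24 * 7)  # week-scale component
high_freq = temps - low_freq                   # what the enclosure must buffer
```

The same decomposition applied to the real sensor logs would let us quote separate amplitudes for the seasonal-style drifts (item a) and the day-scale fluctuations (item b).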

Below we plot the first few days of temperature data obtained:


Temperature vs Time for the 6 temperature sensors from Nov 15 – Dec 2 (data for Nov 30 missing).

From the temperature plot above we can see the following:

1) The Calibration box (red curves), which is completely closed, very effectively buffers out the high-frequency temperature changes seen in the Spectrograph Room (green curves).

2) The Calibration box (red curves) drifts with the longer-term temperature changes, as expected, but these long-term variations are larger than expected given the ±0.3°C control setpoint. This issue is currently under investigation. We will continue to monitor the temperatures carefully over the coming months.

 

Posted in HPF Hardware, Project Development