I wrote some stuff on this idea sometime in 2000. I noticed the bandwidth problem last year, 2001. Just now (June 2002) Kragen Sitaker suggested an easy fix that should have occurred to me: send the data from the station to the Earth via laser. I recapitulate here from the beginning and fix a few bugs and confusions.

Really Long Baseline Interferometry

Imagine two satellites orbiting the Sun, perhaps at the Earth's two stable Lagrange points (L4 and L5). These two stations each have an antenna pointing in the same direction, out of the plane of the Earth's orbit, listening to some broad band at a few cm wavelength. This signal is probably digitized and sent towards Earth by laser, where it is received by one of two satellites in near Earth orbit. The sampling rate must be a few GHz. I will assume for now that this link uses only forward error control, since the round-trip latency would require something like a TB of buffer at the station for backward error control. In this design the station does not require much RAM.
There is ample energy for a collimated laser beam, which is very energy efficient. The laser transmission does not interfere with the microwave receiver at the station. The laser receiver can be in near Earth orbit, perhaps two satellites to avoid occlusion by the Earth. Data buffering and error control on the ground link can be done there.
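To check the terabyte figure, here is a rough back-of-the-envelope calculation in Python. The 4 GHz sampling rate and 2 bits per sample are assumed round numbers, not part of the design; the L4 and L5 points sit about 1 AU from Earth.

    # Rough sizing of the retransmission buffer a station would need if the
    # laser link used backward error control (wait for acknowledgements).
    C = 299_792_458.0            # speed of light, m/s
    AU = 1.495978707e11          # astronomical unit, m; roughly Earth to L4/L5

    sample_rate_hz = 4e9         # assumed: "a few GHz"
    bits_per_sample = 2          # assumed quantization

    data_rate_bps = sample_rate_hz * bits_per_sample   # about 8 Gbit/s
    round_trip_s = 2 * AU / C                          # about 1000 s to hear a NAK

    buffer_bytes = data_rate_bps * round_trip_s / 8
    print(f"data rate: {data_rate_bps / 1e9:.0f} Gbit/s")
    print(f"round-trip light time: {round_trip_s:.0f} s")
    print(f"retransmission buffer: {buffer_bytes / 1e12:.1f} TB")

With these numbers the buffer comes out near one terabyte, which is why forward error control alone looks attractive.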

I propose first to image a nearby galaxy that is well out of the plane of the Earth's orbit, for it will present a still image over the required six month exposure. For a given few seconds of gathered data, a point source in the galaxy will produce a sharp peak in the cross-correlation of the signals from the two stations. The position of this peak in the cross-correlation accurately locates the source, within the galaxy, along the direction between the two stations. Correlating the signals from the two stations thus produces an image of the galaxy projected onto an axis parallel to the current vector between the stations. If such projected images are collected for half a year then we will have the same sort of data that CAT scanners use to produce their useful images. This is the tomography problem first studied by Radon in 1917; the operation is now called the Radon transform, and what we actually need is the inverse transform. Here is a good theory synopsis. Here are some claims on efficient computing of such transforms. Our project would go well beyond current practice. The ultimate image would have about 10^17 pixels, or about 10^6 pixels per imaged star. Image compression would serve most users.
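Here is a minimal sketch, in Python with NumPy, of the per-chunk correlation step: two stations record the same noise-like point-source waveform offset by some delay, and the peak of their cross-correlation recovers that offset, which is what places the source along the axis joining the stations. The chunk length, delay, and noise level are made-up illustrative numbers.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1 << 20                # samples in this short chunk of data
    true_delay = 137           # offset, in samples, between the two records (assumed)

    # A point source looks like broadband noise; the two stations see the same
    # waveform at different offsets, plus independent receiver noise.
    source = rng.standard_normal(n + true_delay)
    station_a = source[:n] + 0.5 * rng.standard_normal(n)
    station_b = source[true_delay:] + 0.5 * rng.standard_normal(n)

    # Cross-correlate via FFT; the location of the peak is the relative delay.
    xcorr = np.fft.irfft(np.fft.rfft(station_a) * np.conj(np.fft.rfft(station_b)))
    lag = int(np.argmax(xcorr))
    if lag > n // 2:
        lag -= n               # fold large positive lags back to negative ones
    print("recovered delay:", lag, "samples")   # prints 137

Accumulating such one-dimensional projections as the baseline direction turns over half a year builds up a sinogram, and a standard inverse Radon transform (filtered back-projection, as in skimage.transform.iradon) would turn it into an image.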

Kragen proposes studying other nearby planetary systems in our galaxy. There are special problems with this task, as the subject is moving on roughly the same time-scale as the tomographic slices are being taken. Ignoring the spinning of the remote planet for now, the first problem is to form a theory of the orbit of the planet. This is a bit like the problem of a GPS receiver acquiring satellites: there would be several stages, starting from a theory of one precision and improving it. I don't know how to do this but I bet it can be done.
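A hedged sketch of what those stages might look like, in Python: a coarse grid search over a trial orbital period, then repeated finer searches around the best candidate. The figure of merit here (score) is a placeholder standing in for how sharply the correlated data focus when phased to the trial orbit; a real one would re-reference each tomographic slice to the predicted planet position before summing.

    import numpy as np

    def score(period_days):
        # Placeholder figure of merit; fake a peak at 365 days so the
        # search below has something to find.
        return np.exp(-((period_days - 365.0) / 3.0) ** 2)

    def refine(lo, hi, steps=200, passes=4):
        # Coarse-to-fine: evaluate a grid of trial periods, keep the best,
        # then repeat on a narrower interval at higher resolution.
        for _ in range(passes):
            grid = np.linspace(lo, hi, steps)
            best = grid[np.argmax([score(p) for p in grid])]
            half_width = (hi - lo) / steps
            lo, hi = best - half_width, best + half_width
        return best

    print("estimated period:", round(refine(50.0, 2000.0), 1), "days")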

Several years ago some nearby star was measured frequently for brightness and color. A model was built that included the rotation rate and the angle between the spin axis and the vector from here to the star. That yielded several bits of information describing where on the star the bright spots were. I don't know whether there is developed math for this.

The LATOR experiment requires some similar equipment.
Optical wavelengths?