Artificial Intelligence and Astrophotography

Over the past decade, research into neural networks has produced countless stunning results in photography and image processing: for example, neural style transfer, which combines artwork and photography to generate unique images, or the AI-based denoisers that power ray tracing in games and animated films.

Examples of Neural Style Transfer (NST)

Artificial neural networks are loosely modelled on the biological neural networks that make up animal brains. They learn to perform tasks by considering examples, normally without any task-specific instructions. The network itself consists of a set of inputs, some hidden layers of neurons and a set of outputs. Fundamentally this is just simple mathematics: the inputs are numbers which are combined and manipulated by the neurons before being connected to possible outputs. The network is first trained, usually by being shown examples of inputs and desired outputs, with the neurons adjusted as it goes so that its output matches what is desired.

Chess engines illustrate the difference between this approach and conventional AI. A conventional engine like Stockfish works by considering all possible moves a player could make, then all of the moves an opponent could make in response, and so on. It assigns a score to each candidate move based on the possible final positions and picks the strongest one. In contrast, AlphaZero, developed by Google to play games such as chess and Go, is based on neural networks. AlphaZero was trained by playing millions of games against itself, constantly tweaking its network to improve its chances of winning; the training took around nine hours. After this training AlphaZero was able to defeat the strongest Stockfish engine with 28 wins, 0 losses and 72 draws out of 100 games.
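The "simple mathematics" described above can be made concrete with a tiny sketch: two inputs, one hidden layer of three neurons and one output. The weights below are arbitrary illustrative numbers, not a trained model.

```python
import numpy as np

# A minimal sketch of a neural network forward pass: inputs are numbers
# which are combined and manipulated by the neurons before reaching the
# output. The weights here are made up for illustration; training would
# adjust them so the output matches what is desired.

def relu(x):
    return np.maximum(0, x)

W1 = np.array([[0.5, -0.2, 0.1],
               [0.3,  0.8, -0.5]])    # input -> hidden weights
b1 = np.array([0.1, 0.0, -0.1])       # hidden biases
W2 = np.array([[0.7], [-0.4], [0.2]]) # hidden -> output weights
b2 = np.array([0.05])                 # output bias

def forward(x):
    hidden = relu(x @ W1 + b1)        # combine inputs at each hidden neuron
    return hidden @ W2 + b2           # combine hidden activations into output

print(forward(np.array([1.0, 2.0])))
```

Training amounts to nudging `W1`, `b1`, `W2` and `b2` so that `forward` produces the desired outputs for the example inputs.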

This demonstrates the power of a neural network: after only a few hours of playing against itself, it could hold its own against a conventional engine that had been in development for over a decade.


Neural networks have been incorporated into photography workflows before, normally for simple image enhancement. As mentioned at the start of this article, AI can be very effective for denoising and 'recovering' detail in very low signal-to-noise areas without the smoothing and artifacts usually introduced by heavy noise reduction.

Recently however, a program has been released which could have a large impact on astrophotography workflows, as well as producing stunning results in its own right.

Starnet (as the name suggests) is a program based on neural networks which is able to isolate and remove stars in photos.

An image with stars manually removed

Done manually, this process normally requires careful tweaking of parameters to generate clean, accurate star masks, which are then used in morphological transformations to erode the stars away. Starnet, by comparison, can complete the whole process in minutes thanks to the training already carried out on its neural network.
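The manual idea can be sketched very roughly: threshold the image to find star candidates, grow that mask slightly to catch the faint star edges, then fill the masked pixels with a crude background estimate. This is a simplified NumPy illustration, not the parameter-heavy morphological workflow real processing software uses.

```python
import numpy as np

# Simplified sketch of manual star removal: build a star mask by
# thresholding, dilate it to cover star edges, then replace masked
# pixels with the image median as a crude background estimate. Real
# workflows use careful parameter tweaking and proper morphological
# erosion; this is only an illustration of the idea.

def dilate(mask, iterations=1):
    """Grow a boolean mask by one pixel per iteration (4-connected)."""
    for _ in range(iterations):
        grown = mask.copy()
        grown[1:, :] |= mask[:-1, :]
        grown[:-1, :] |= mask[1:, :]
        grown[:, 1:] |= mask[:, :-1]
        grown[:, :-1] |= mask[:, 1:]
        mask = grown
    return mask

def remove_stars(img, threshold=0.8, grow=2):
    star_mask = img > threshold          # bright pixels are star candidates
    star_mask = dilate(star_mask, grow)  # include the faint star edges
    starless = img.copy()
    starless[star_mask] = np.median(img) # crude background fill
    return starless, star_mask

# Tiny synthetic example: flat background with one bright "star".
img = np.full((9, 9), 0.1)
img[4, 4] = 1.0
starless, mask = remove_stars(img)
```

The `threshold` and `grow` parameters are exactly the kind of per-image tweaking that makes the manual approach slow, and that Starnet's training replaces.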

Currently the program is at an early stage of development. While the neural network itself is perfectly capable, the user interface takes the form of a command line, and Python and TensorFlow must be installed alongside it. The process is very fast compared to the manual approach: run on a fast CUDA-enabled graphics card with TensorFlow GPU, it took less than 3 minutes on most images. There were some issues when loading very large files. For example, a TIFF version of my Carina Nebula mosaic required some fiddling in the Python files to raise the maximum number of pixels allowed, and it failed to produce an image when run with 8 GB of RAM. It did, however, work on a compressed JPG version of the image.
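If the pixel cap comes from Pillow's decompression-bomb safeguard (an assumption on my part: where the limit lives depends on how your copy of the scripts loads images), the fix is a one-line change before the image is opened:

```python
from PIL import Image

# Pillow refuses to open images larger than Image.MAX_IMAGE_PIXELS
# (roughly 89 megapixels by default) as a decompression-bomb safeguard.
# For a large mosaic you trust, the cap can be disabled before loading.
# Assumption: the size limit in the Starnet scripts comes from Pillow;
# check the actual image-loading code in your copy.

Image.MAX_IMAGE_PIXELS = None          # disable the check entirely
# or set a specific ceiling instead:
# Image.MAX_IMAGE_PIXELS = 500_000_000

# img = Image.open("carina_mosaic.tif")  # hypothetical filename
```

Disabling the check is safe for files you created yourself; the guard exists to protect against maliciously crafted images.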

M51 - Whirlpool Galaxy

The darker background in the manual image is a result of further post-processing rather than Starnet. The stars in this image were removed very well by Starnet, leaving fewer artifacts and small stars behind than the manual process, while taking only 30 seconds. However, a few downsides are highlighted here. Some faint background galaxies were removed, as were some of the bright details in the Whirlpool. Unlike a manual workflow, Starnet allows no customisation: the network has no human-readable parameters that can be tweaked, and instead determines the best output according to its training. In some situations manual intervention is preferable, for instance when star-like objects such as galaxies or bright globules in nebulae should be preserved. The pretrained network presents a few issues of its own. The default weights were trained using a specific telescope and camera, so if the image scale or PSF of your optical system varies greatly from the original, the program will have trouble fully removing stars; images acquired on a Newtonian system with large diffraction spikes are one example.

Carina Nebula Mosaic

This mosaic of the Carina Nebula also highlights the issue. While it was shot on a refracting telescope, some stars are much larger than those the network was trained on. As a result it struggled to remove the full extent of the larger star halos, leaving behind faint ghosts. This problem is quite solvable, however: since the whole network can be downloaded online, it can be retrained on your own data to produce results matched to your equipment. This takes time and plenty of manual effort, but makes the program much more versatile.
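Retraining needs matched pairs of original and (manually produced) starless frames from your own equipment, usually chopped into small tiles. A hedged sketch of preparing such pairs follows; the 64-pixel patch size, the in-memory arrays and the function name are illustrative assumptions, not Starnet's actual training code.

```python
import numpy as np

# Hedged sketch of building a training set for retraining: given one
# matched (original, starless) image pair from your own equipment,
# crop random patches to use as training examples. The patch size and
# data layout are assumptions for illustration only; consult the real
# Starnet training scripts for the expected format.

def make_training_pairs(original, starless, patch=64, n=8, seed=0):
    assert original.shape == starless.shape
    rng = np.random.default_rng(seed)
    h, w = original.shape[:2]
    pairs = []
    for _ in range(n):
        y = rng.integers(0, h - patch + 1)   # random top-left corner
        x = rng.integers(0, w - patch + 1)
        pairs.append((original[y:y+patch, x:x+patch],
                      starless[y:y+patch, x:x+patch]))
    return pairs

# Synthetic stand-ins for a real image pair.
original = np.random.rand(256, 256)
starless = np.clip(original - 0.5, 0, 1)
pairs = make_training_pairs(original, starless)
```

The manual effort is in producing the starless targets in the first place; once a few good pairs exist, patch extraction and training are automatic.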

The issue at the bottom right of the image is due to improper processing of the mosaic, made visible by the removal of the dense star field in the region. This information could be useful when reprocessing the image, and it demonstrates another use for Starnet beyond final image creation. There are many occasions in image processing where a star mask or starless field is needed. A starless image can be used to enhance the contrast of a scene, or to generate a mask that protects either the nebulosity or the stars when manipulating curves, allowing the saturation and lightness of the objects to be adjusted independently of each other. Star masks are also helpful when sharpening an image with deconvolution routines, which can produce halos or artificial colours in bright stars. All of these can be generated automatically using Starnet.
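One simple way to turn a starless output into a star mask is to subtract it from the original: whatever the network removed is, by definition, the stars. A minimal sketch (the array values and the 0.05 threshold are illustrative, not from Starnet):

```python
import numpy as np

# The difference between the original frame and the starless output
# contains exactly what was removed: the stars. Thresholding that
# residual gives a star mask usable for curves, saturation control or
# protecting bright stars during deconvolution. The 0.05 threshold is
# an illustrative assumption to suppress tiny numerical residue.

def star_mask(original, starless, threshold=0.05):
    residual = np.clip(original - starless, 0, None)  # stars only
    return residual > threshold

# Synthetic example: flat background, one star the network removed.
original = np.full((5, 5), 0.1)
original[2, 2] = 0.9
starless = np.full((5, 5), 0.1)       # star gone, background intact
mask = star_mask(original, starless)
```

The same residual, left unthresholded, can also be added back onto a contrast-stretched starless image to recombine the stars at the end of processing.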


Starnet is a very powerful tool for speeding up processing workflows, but its limits must be taken into account. If your data varies significantly from what the default network was trained on, the network should be retrained using starless images captured with the same equipment. Because the process is determined automatically, there is a lack of manual control over the extent and range of star removal. Its potential when used carefully should not be ignored, however. The speed at which starless images can be generated allows for much faster image processing at both the final and intermediate stages. It demonstrates a major step towards bringing cutting-edge technologies into astrophotography processing, and the use of neural networks will only become more prevalent in the future.
