Immediate nostalgia activated. I ran this on a Pentium machine (I think) at home, still living with my parents. Sometimes I yearn for the optimism and relative naïveté of those times.
Pentium 3 in a crusty Compaq with a 5.25" Bigfoot hard drive.
Those were the days
I still have the screensaver burned into my mind, along with the moment I realized that disabling it let processing run meaningfully faster.
This paper describes the front end of SETI@home and provides parameters for the primary data source, the Arecibo Observatory.
Most of this data was recorded commensally at the Arecibo Observatory over a 22 yr period.
Interesting, as Arecibo collapsed in December 2020. It sounds like they still have a lot of data to churn through.
>Most radio SETI projects process data in near real-time using special purpose analyzers at the telescope. SETI@home takes a different approach. It records digital time-domain (also called baseband) data, and distributes it over the internet to large numbers of computers that process the data, using both CPUs and GPUs.
Definitely something going on here I'm not following.
>SETI@home is in hibernation. We are no longer distributing tasks. [0]
Is this paper really old or something? I would love to turn on my clients again :D
[0] https://setiathome.berkeley.edu/
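The pipeline quoted above (record baseband data, then ship it out for processing) boils down to splitting a long time-domain recording into fixed-size work units. Here is a minimal illustrative sketch of that idea; the chunk length and overlap are made-up parameters, not the values SETI@home actually used:

```python
import numpy as np

def split_into_work_units(samples, unit_len, overlap):
    """Split a 1-D sample array into fixed-size, overlapping chunks.

    Overlap between consecutive chunks means a signal straddling a
    chunk boundary still appears whole in at least one work unit.
    """
    step = unit_len - overlap
    units = []
    for start in range(0, len(samples) - overlap, step):
        units.append(samples[start:start + unit_len])
    return units

# Stand-in for a baseband recording: 100 samples.
recording = np.arange(100)
units = split_into_work_units(recording, unit_len=30, overlap=10)
print(len(units), [len(u) for u in units])  # 5 units; the last one is a partial chunk
```

Each chunk would then be handed to a volunteer machine for independent analysis, which is what makes the problem embarrassingly parallel.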
The distributed-compute part of the project has been turned off, but data analysis continues.
I know what you mean; these types of projects inspired me to contribute as a young citizen scientist.
A different domain, but https://foldingathome.org/ is still running. Using distributed compute to study protein folding.
If you are looking for a good list of these types of projects: https://boinc.berkeley.edu/projects.php
Wasn't this largely solved by DeepMind's AlphaFold?
https://alphafold.ebi.ac.uk/
They went into hibernation, in terms of accepting new inputs, several years ago. They had more data than they could handle and switched to just analyzing existing data and final reports.
With the final analysis of this project complete, I do wonder if there's a way to bring it back with distributed agents doing the part that was so time-intensive for researchers that they had to kill it.