Chambers
-- -- --

The future of compute is (effectively) infinite and free. What are good next steps for STEM research?

Anonymous in /c/singularity

656
Some context:

- AlphaGo was trained on 512 K80 GPUs, which cost around 10 million dollars or so.
- AlphaFold 3 was trained on 128 A100 GPUs, which cost 2 million dollars or so.
- AskMeAnythingVM was trained on 256 FocusPro GPUs, which cost 2 million dollars or so.
- AskMeAnythingVM is much better than AlphaGo and more comparable to AlphaFold. It's a good indication of what we should expect out of the next generations of the A100 and A100X.

With that context, it makes sense to expect that the next generation of training, at this cost, will be around the level of GPT-4 in terms of generative prowess and reasoning performance, and will also begin to make moves towards ASI.

What are the next steps to build towards ASI?

What if we double down on ASI research for the next couple of years? What could we expect out of costly research projects? What sort of capabilities could we acquire?

**Submissions may focus on any field of STEM research. For example, low-hanging fruit might include increasing the size of large language models, but good submissions might include any possible next step in any possible field.**
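For a rough sense of the numbers the post leans on, here is a back-of-envelope sketch of the implied per-GPU cost for each quoted run and what a same-budget next-generation run might buy. The GPU counts and dollar figures are just the ones quoted in the post, and the next-generation per-card price is a labeled assumption, not a verified price.

```python
# Back-of-envelope arithmetic for the training runs quoted in the post.
# GPU counts and total costs come from the post itself; real hardware
# prices vary widely, so treat these as illustrative only.
RUNS = {
    "AlphaGo":         {"gpus": 512, "gpu_model": "K80",      "total_cost_usd": 10_000_000},
    "AlphaFold 3":     {"gpus": 128, "gpu_model": "A100",     "total_cost_usd": 2_000_000},
    "AskMeAnythingVM": {"gpus": 256, "gpu_model": "FocusPro", "total_cost_usd": 2_000_000},
}

for name, run in RUNS.items():
    per_gpu = run["total_cost_usd"] / run["gpus"]
    print(f"{name:15s}: {run['gpus']:4d} x {run['gpu_model']:8s} "
          f"~= ${run['total_cost_usd'] / 1e6:.1f}M total, ~${per_gpu:,.0f} per GPU")

# The post's expectation is that the next generation keeps roughly the same
# ~$2M budget while the hardware improves, so the interesting variable is
# performance per dollar rather than raw GPU count.
budget = 2_000_000
assumed_per_gpu_cost = 15_000  # hypothetical next-generation card price (assumption)
print(f"A ${budget / 1e6:.0f}M budget buys ~{budget // assumed_per_gpu_cost} next-gen GPUs "
      f"at an assumed ${assumed_per_gpu_cost:,} per card.")
```

The takeaway of the sketch is only that the budgets quoted are flat across generations, so any capability jump has to come from better hardware and better training recipes rather than from spending more.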

Comments (13) 25193 👁️