They actually need the entire backside of the panels to cool them - otherwise they would literally burn out from the heat accumulated from sun exposure.
The crux of it is that radiative cooling in space depends on exposed surface, not total surface area like a radiator or heatsink on Earth. If you put an existing finned heatsink in space, it would radiate most of its heat back onto itself - the only net cooling you'd get is the fraction that radiates outward without hitting another fin.
For a benchmark - the ISS uses about 4,500 sq ft (420 sqm) of radiators just to keep its onboard equipment (~70 kW) cooled. That's ~150-200 W/sqm.
That means, per ~500 W GPU, you'd need about 2.5-3.0 sqm of passive radiators.
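The radiator sizing above can be sketched as a back-of-the-envelope calculation. The ISS figures come from the comment; the ~500 W per-GPU load is my assumption to make the arithmetic explicit, not a measured number.

```python
# Radiator sizing from the ISS benchmark: ~70 kW rejected over ~420 m^2.
ISS_RADIATOR_AREA_M2 = 420      # ~4,500 sq ft
ISS_HEAT_REJECTION_W = 70_000   # ~70 kW of onboard equipment

flux = ISS_HEAT_REJECTION_W / ISS_RADIATOR_AREA_M2  # net radiative flux, W/m^2

def radiator_area_m2(heat_watts: float, flux_w_per_m2: float = flux) -> float:
    """Passive radiator area needed to reject a given heat load."""
    return heat_watts / flux_w_per_m2

print(f"{flux:.0f} W/m^2")                        # ~167 W/m^2
print(f"{radiator_area_m2(500):.1f} m^2/GPU")     # ~3.0 m^2 per 500 W GPU
print(f"{radiator_area_m2(1_000_000):,.0f} m^2")  # ~6,000 m^2 for a 1 MW satellite
```

The same flux figure scales linearly, which is why a 1 MW satellite ends up needing thousands of square meters of radiator.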
For a 1 MW satellite (~8 datacenter racks of GB200/NVL72) you'd need basically half a football field of bleeding-edge solar panels (that also need to radiate their heat on the reverse side) and a similarly sized cooling array of heat radiators for the electronics.
This is on the scale of 40-50 tons - about 10% of the ISS. This should fit on Falcon Heavy or Starship - assuming the solar array and radiators can fold up to fit inside. You could, purely based on weight, launch 2 of these per Starship launch.
If you consider the Opex savings (electricity, rent, facilities maintenance) and putting 2 of these on a single starship launch, I still think the ROI would be too long. You're saving about ~$1M per year in Opex but it's costing you $25M to launch it into space and likely an extra ~$50M in satellite equipment (based on starlink satellite costs) on top of the compute. Will those GPUs still be useful in 10 years? Probably not.
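The payback arithmetic implied above is worth making explicit. All the inputs are the rough estimates from this comment (launch share, satellite hardware, Opex savings), not real pricing:

```python
# Rough payback math for one 1 MW compute satellite, per the estimates above.
launch_cost = 25e6            # launch cost attributed to the satellite
satellite_hw = 50e6           # satellite equipment beyond the compute itself
opex_savings_per_year = 1e6   # electricity, rent, facilities maintenance

payback_years = (launch_cost + satellite_hw) / opex_savings_per_year
print(payback_years)  # 75.0 -- years to break even, vs. a few years of GPU usefulness
```

Even if the capex estimates are off by a factor of several, the payback period lands far beyond any plausible GPU lifetime.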
I don't think the math justifies the free electricity - even at gigawatt scale (thousands of satellites mass-produced) and at a dramatically lower cost per satellite and per launch. Getting costs down would mean tightly integrating the compute with the satellite hardware, which would make upgrading the compute independently of the cooling/power infrastructure a significant challenge.
The lifetime of a GPU in practice is around 3 years. There is no way this plan is going to work. Musk knows that, he's just counting on stupid people to buy into the SpaceX IPO on hype.
Back in the early 1990s I read some children's futurist book that suggested that we might send solar panels to space that would then beam energy via microwaves to the surface. The book was more fantasy than science, so I took it with a huge grain of salt and appreciated it for its entertainment value.
But do you think schemes that try to direct solar energy to the surface are more practical than running datacenters in space?
I don't think there's any calculus that makes the cost of getting it to space worth it - Elon of all people knows this. If you scale up ground-based solar and add battery storage, the costs are still far lower than putting it in space to gain more hours of usable sun at higher intensity. It's just not cost effective until panels are paper thin, weigh nothing, and the cost of launches - and thus per ton to orbit - drops an order of magnitude.
Space based power arrays with microwave transmission to massive ground fields has been discussed for nearly sixty years. It doesn't make economic sense, at all.
You can extend this by looking at the IP route for the reverse path. I've found it's usually accurate at least to the state on the last hop before the destination - with the added benefit that there's usually an airport or city code in the FQDN of that hop.
In theory it's more secure. Containers and VMs run on real hardware, and containers usually even on the real kernel (unless you use something like Kata). WASM has no system interface by default, so you have full control over what it accesses. In that sense it's similar to the JVM, for example.
Everyone keeps saying that but I’ve found it to be incredibly weak in the real world every single time I’ve reached for it. I think it’s benchmaxxed to an extent.
I find Gemini does the most searching (and the quickest... regularly pulls 70+ search results on a query in a matter of seconds - likely due to Googlebot's cache of pretty much every page). ChatGPT seems to only search if you have it in thinking/research mode now.
Scientific research and proof-reading. Gemini is the laziest LLM I've used. Frequently it will claim it searched for something and just make stuff up; that basically never happens to me when I'm using gpt5.2.
The way I summed it up to a friend recently is that Gemini 3 is smarter but Grok 4 works harder. Very loose approximation, but roughly maps to my experience. Both are extremely useful (as is GPT-5.2), but I use them on different tasks and sometimes need to manage them a bit differently.
Maybe they messed something up in the official interface then. I've heard that the PDF processing capabilities are also significantly worse in Gemini UI compared to using it through the API or Google AI Studio.
With Gemini, any coding task produces some trash, whereas with ChatGPT I can prototype quite a lot, sometimes delivering an entire app almost entirely vibe-coded. With Gemini, it takes just a few prompts to get me mad enough to close the tab. I only use the free web versions, never the agentic 'mess with my files' thing. Claude is even better than that, but I keep it for serious tasks only - it's that good.
Gemini loves to ignore Gemini.md instructions within the first minutes, replace half of a Python script with "# other code...", or try to delete files OUTSIDE of the project directory, then apologise profusely and try again.
Utterly unreliable. I get better results, faster, editing parts of the code with Claude in a web ui, lol.
5.2 is back to being a sycophantic, hallucinating mess for most use cases. I've anecdotally caught it out in many of my sessions, where it apologizes - "You're absolutely right... that used to be the case but as of the latest version, as you pointed out, it no longer is" - about something that never existed in the first place. It's just not good.
On the other hand - 5.0-nano has been great for fast (and cheap) quick requests and there doesn't seem to be a viable alternative today if they're sunsetting 5.0 models.
I really don't know how they're measuring improvements in the model since things seem to have been getting progressively worse with each release since 4o/o4 - Gemini and Opus still show the occasional hallucination or lack of grounding but both readily spend time fact-checking/searching before making an educated guess.
I've had ChatGPT blatantly lie to me, saying there were several community posts and Reddit threads about an issue. After failing to find them, I asked where it found those, and it flat out said "oh yeah, it looks like those don't exist".
Much like talking to your doctor - you need to ask/prompt the right questions. I've seen ChatGPT and Gemini make one false assumption that was never mentioned, run with it, and continue referencing it down the line as if it were fact... That can be extremely dangerous if you don't know enough to ask it to reframe, verify, or correct its assumption.
If you are using it like a tool to review/analyze or simplify something - e.g. explain risk stratification for a particular cancer variant and what is taken into account, or ask it to provide probabilities and ranges for survival based on age/medical history - it's usually on the money.
Every other caveat mentioned here is valid, and it's valid for many domains not just medical.
I did get hematologist/oncologist-level advice out of ChatGPT 4o based on labs, PCR tests, and symptoms - and those turned out to be 100% true based on how things panned out in the months that followed and ultimately the treatment that was given. Doctors do not like to tell you the good and the bad candidly - it's always "we'll see what the next test says but things look positive" and "it could be as soon as 1 week or as long as several months depending on what we find" when they know full well you're in there for 2 months minimum unless you're a miracle case. Only once cornered or prompted will they give you a larger view of the big picture. The same is true for most professional fields.
The one thing a real doctor can do is actually touch the patient and run tests, even simple things like using a stethoscope. At best, an AI "doctor" is just comparing patient-provided symptoms to a lookup table of conditions. No better than what WebMD used to do (still does?) when you would answer a questionnaire and be provided with a list of conditions ranging from a cold to the bubonic plague. AI loves taking everything you say at face value; it doesn't know how to think critically. While doctors shouldn't think of a patient as an adversary, patients often lie or unintentionally obscure symptoms or their severity. Even the most junior doctor can provide a more thorough examination over the phone or through chat than an AI that believes everything it hears.
I remember trying to talk to WebMD when I had pain in my side and appendicitis was near the bottom of the list, the top stuff was either nothing serious or highly improbable. The pain didn't seem as bad as what the appendicitis pain should have been based on descriptions. My mother got her doctor to call me and he walked me through some touching and said "you likely have appendicitis, don't talk to WebMD next time." I went to the hospital that night and that doctor told me I was likely hours away from a burst appendix. I can only imagine what nonsense ChatGPT would have told me.
Just like with most professions, the real world is nothing like the textbook. Being able to pass a medical exam doesn't necessarily mean you're going to be a good doctor. Most of the exam is taken during med school, and the final portion is only taken after the first year of residency. They still have at least another few years of residency after passing the USMLE, and that's with supervision under an attending doctor. Being able to pass the USMLE is not equivalent to being a successful doctor with years of experience.