Interesting. I really feel these sorts of searches could be automated. Given a database of all known objects in space, attempt to infer likely objects that have not yet been observed. Then automatically direct telescopes to make observations at the inferred location. If they find something, then pass it along to a human for assessment.
Piece by piece the database would grow in coverage. As it does, its power to make predictions of unknown objects should grow, until we've reached the limit of our current instruments' ability to observe.
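The loop described above (infer likely unobserved objects, point a telescope at the prediction, pass hits to a human) can be sketched in a few lines. This is a toy illustration only: every function here is a hypothetical placeholder, and no real survey or telescope API is implied.

```python
# Toy sketch of the proposed loop; all names are hypothetical placeholders.

def infer_missing_objects(known_positions):
    """Placeholder inference: propose the midpoint between each
    adjacent pair of known objects as a spot worth checking."""
    known = sorted(known_positions)
    return [((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
            for a, b in zip(known, known[1:])]

def observe(position, sky):
    """Placeholder 'telescope': report whether the (made-up) sky
    holds an object near the predicted position."""
    return any(abs(position[0] - x) < 0.2 and abs(position[1] - y) < 0.2
               for x, y in sky)

def automated_survey(database, true_sky):
    """One pass of the loop: infer, observe, queue detections for review."""
    for_review = []
    for cand in infer_missing_objects(database):
        if observe(cand, true_sky):
            for_review.append(cand)   # a human assesses these
    return for_review

# Two catalogued objects; the 'true sky' hides a third between them.
catalog = [(10.0, -5.0), (11.0, -5.0)]
sky = catalog + [(10.5, -5.0)]
print(automated_survey(catalog, sky))  # -> [(10.5, -5.0)]
```

Each confirmed detection would feed back into the database, which is what lets the loop's predictive power grow over time.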
The original survey was built for near-Earth objects, but the extra-solar-system transients detected (e.g., supernovae) became the subject of many papers.
Supernova catalogs compiled from observations like these -- though from other transient-detection networks -- became the basis of the 2011 Nobel Prize for the observational discovery of dark energy: https://www.nobelprize.org/nobel_prizes/physics/laureates/20...
You may be after something harder, which is to use orbit anomalies to infer the positions of objects - essentially automating all the orbital simulation efforts described in the article. That, of course, has not been done.
Yes. Refreshing my memory from the link you gave, it seems like the Neptune searchers would have benefited decisively from image-differencing and all-inclusive source databases -- like the ones that are now automated. Because, during their search for Neptune, they observed it twice without noticing it:
"Only after the discovery of Neptune had been announced in Paris and Berlin did it become apparent that Neptune had been observed on August 8 and August 12 but because Challis lacked an up-to-date star-map, it was not recognized as a planet."
> As it does, its power to make predictions of unknown objects should grow, until we've reached the limit of our current instruments' ability to observe.
Well that's kind of the problem already. For the objects that are interesting, we've either discovered them or they are hard (or impossible) to discover with current instruments. For the ones that are merely hard but not impossible, the process of finding them requires human cleverness or significant telescope resources (and human cleverness can be applied to figure out the best way to direct the limited resources). I'm not really sure that automation helps...
Preaching to the choir! However, getting time on the right telescopes at the appropriate times to check those inferred locations could be a problem.
Speculation about a ninth planet, and the first papers on it, dates back to the 1800s. Don't attribute this to "conspiracy theorists". It just so happens that they like to cling to "not yet fully understood" artifacts of theories and research fields so they can fuel their ramblings and lure people. It was the Mayan calendar, then the ninth planet - or Nibiru, as they like to call it - and now they like to use the word quantum a lot.
When light travels, it spreads out as it goes, so the power falling on any given area decreases with the square of the distance. This is the famous inverse square law.
A radar signal decreases the same way on the way out. But radar is even worse: the echo decreases again on the way back to the sender, so the returned signal falls off with the fourth power of the distance.
So if you double the distance to the target, the radar signal coming back is 1/16 as strong. If you increase the distance by a factor of 1,000, then the signal coming back is 1/1,000,000,000,000 as strong. And so on...
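The fourth-power scaling behind those numbers is a one-liner; this small sketch (the function name is my own) just makes the arithmetic explicit:

```python
def radar_return_ratio(distance_factor: float) -> float:
    """Relative strength of the returned radar signal when the
    target distance is multiplied by distance_factor.

    The outbound signal spreads as 1/r^2, and the echo spreads as
    1/r^2 again on the way back, so the round trip goes as 1/r^4.
    """
    return 1.0 / distance_factor ** 4

print(radar_return_ratio(2))     # doubling the distance -> 0.0625 (1/16)
print(radar_return_ratio(1000))  # 1000x the distance -> 1e-12
```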
This makes radar a pretty poor tool for space exploration.
And it makes it amazing that we actually did use Earth-based radar to study the planet Mercury. Before the Messenger spacecraft, some of our best science about Mercury came from Arecibo radar observations.
It still requires a significant amount of power, and the trade-off now is time: you're shining a point (which still spreads somewhat) onto something very far away, so you have to sweep the laser very slowly to put enough energy on any one particular hypothetical object to get a bounce-back.
https://www.youtube.com/watch?v=CMCwezegPNg
Forbes does mention the blog, and that's worth following directly too:
http://www.findplanetnine.com/