Honestly I thought I was going to see some visualisation of a 1px-high image vs. various other long things. E.g. "At a pixel density of 330ppi this image would be longer than 1000 Starship Enterprises*". Alas, no.
> What a 600,000 megapixels wide picture looks like (oddly-even.com)
Megapixel: I do not think it means what you think it means.
> A megapixel (MP or Mpx) is one million pixels [1]
600,000 * 1,000,000 = 600,000,000,000
> The largest photo ever taken of Tokyo is ... 600,000 pixels wide [2]
We have 600 * 10^9 pixels; at 330ppi this would correspond to 1.818 * 10^9 inches, or 4.618 * 10^7 m, i.e., about 46,200 km.
Now, there are different Starship Enterprises of somewhat varying lengths, but let's take the NCC-1701-D, because that's the first one I found data on. Wikipedia puts its length at 642.5m [1]. Thus, it would be about 71,880 Starship Enterprises for the given length (not accounting for any other dimensions, of course).
More interestingly, it's a bit more than the circumference of the Earth (~40,000 km).
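For anyone who wants to check the numbers, here is the same arithmetic as a small Python sketch, using only the figures quoted above (330ppi, 642.5m for the NCC-1701-D, ~40,000km for the Earth):

    # Back-of-the-envelope: a 1-pixel-high strip of 600,000 megapixels at 330ppi
    pixels = 600_000 * 1_000_000          # 6.0e11 pixels
    inches = pixels / 330                 # ~1.818e9 inches
    km = inches * 0.0254 / 1000           # ~46,200 km

    enterprise_km = 0.6425                # NCC-1701-D length, 642.5 m
    earth_circumference_km = 40_000       # rough figure used above

    print(km / enterprise_km)             # ~71,880 Enterprises
    print(km / earth_circumference_km)    # ~1.15 Earth circumferences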
I'm really curious why you two consider a fictional spaceship a good "gut feeling" measure of size. I like Star Trek, but I have absolutely no feeling for the size of the Enterprise. I'd believe it if you said it was 3km long, but also if you said it was 400m long.
Not meant as a harsh criticism though, I like the math and the geekiness :)
Personally, I'd say I have a better feeling for "starship enterprises" or "battroids" [0] in terms of length or height than the ubiquitous "football fields" or "empire state buildings".
Both those fictional measures are things I've visualized being inside of or next to countless times. I can't say the same about the other two.
I remembered it as being around 700m, but didn't remember the exact figure. You could probably also ask either of us about the inner workings of the transporter or about warp field dynamics ;). It's a geek thing.
You spend too much time with something, it feels natural to you ;).
2) It is not 600,000 megapixels; it is 600,000 pixels wide. If you calculate it as a full sphere, then it's 180 gigapixels. But there is black space at the bottom, so we are calling it 150 gigapixels. (A quick sketch of that arithmetic follows this list.)
3) Yes we have censored a few things in the image which might be embarrassing to those people involved. Most of those things would not be embarrassing in other cultures (I think) such as a woman hanging laundry, but Japan is "different". Per the request of journalists at Asahi Shimbun who published a story about it there, we covered up a few bits in the photo. Oh, and the guy sleeping on the ground? Well that's embarrassing anywhere. Poor fellow probably did not expect to be famous on the internet as he lay down on the bench to sleep, fell off the bench, and kept on sleeping. (Side note: apparently in Tokyo it is fully acceptable to sleep it off on the street or in a park - it is a safe place! Well, if you're male...)
4) Yes you can use your geek powers to uncover the censored bits and there are already screenshots out there. Oh well. I did not actually see the whole image before publishing it. It is just so big :)
5) Yes, I used a Canon 7D (best pixel density) and a 400mm L f/5.6 lens, because that lens is great, sharp, and fits in my carry-on - can't check camera gear on the plane now, can we?
6) I used a Clauss Rodeon gigapixel robot to control the camera. It is still a lot of work to set the speed and so on, don't think that this is a "set it and forget it" kind of thing. The robot is moving continuously, and the camera is focusing and shooting while moving. This is technologically amazing stuff, but it takes a lot of tweaking to get it to work. In this case I was not completely familiar with the equipment, and I made some mistakes. One section of the image was stitched together from two entire sets of images shot on two different days, in order to get a good alignment.
7) Panning mode: we use the "original QTVR style" of navigation, which is also used in first-person games. This lets you hold the mouse button down and "glide around". I find this vastly preferable to the click-drag-click-drag-click-drag Google style. On touch screens, the movement follows your finger, which is generally more intuitive. I've seen a five-year-old navigate these panoramas on an iPad with no problem. On the PC with a mouse there is not really "one right way".
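A minimal sketch of the arithmetic behind the 180/150-gigapixel figures in point 2, assuming an equirectangular projection where the full sphere is twice as wide as it is tall (the projection is my assumption; the pixel counts are from the post):

    # 600,000 px across covers 360 degrees of yaw; a full 2:1 equirectangular
    # sphere would then be 300,000 px tall for 180 degrees of pitch.
    width_px = 600_000
    height_px = width_px // 2
    full_sphere_px = width_px * height_px   # 1.8e11 pixels

    print(full_sphere_px / 1e9)             # 180.0 gigapixels for the full sphere
    print(150e9 / full_sphere_px)           # ~0.83 -- the ~150 GP actually captured
                                            # once the black area at the bottom is dropped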
Hi!
Thanks a lot for making the image, truly a great piece!
I have a question concerning the Mori Tower building: where did you take it from? From the very top, where you can go outside? The problem would be that you couldn't get to the very edge of the building, or were you somehow able to?
And do you maybe have a "behind the scenes" video of some sort, or just a few shots showing how you work? :D
On the Mori Tower, I shot it from 4 points around the tower. Another commenter sent a link to a panorama where I'm shooting one of the sections. I shot the same on the other side, and the other two sections were shot from the ends of the building.
Hi nakedrobot2, I was wondering if there is any way for us to use the technology that you used to make this photograph. Let me know how we can contact you.
First of all, this effort and its result are fantastic!
Question: how close are sensor densities getting to present capabilities of glass in smartphones?
For example, if a 10 gigapixel (100000x100000) sensor existed that would fit behind an iDevice's lens, how much digital zoom cropping would we really be able to enjoy before we hit details that are bigger than pixels but smaller than the tiny lens can pass clearly?
In terms of pixel density: you may have noticed that the "megapixel wars" have pretty much stopped. And even the sensor that I've been using for nearly all of my "world record sized" panoramas is the 18-megapixel APS-C (1.6x "crop") Canon sensor (550D, 7D). The 7D is nearly 4 years old! Only in recent months has Canon released a camera with smaller pixels (the 20-megapixel 70D). Why?
One major reason is that lenses can't resolve much more than this. Your 10 year old lens that was built for film might not yield any detail at 100% zoom.
Another reason is that they are hitting limits with the wavelength of light. Correcting "chromatic aberration" on the sensor at this resolution is becoming a major issue. Each R, G, and B sensor element responds to a range of wavelengths, each of which gets distorted differently. These all get mixed together and can't be fully disentangled, so to some degree CA is impossible to correct completely.
These are large sensors and large, expensive lenses we are talking about here. With smaller phone sensors and their tiny lenses, the problem is worse. Looking at most phone photos, there isn't really anything interesting at 100% zoom; I think you wouldn't lose any detail at all if you scaled the photo down to 70% of its original size.
To answer your question - it will take a great number of breakthroughs before there is anything like a 10 gigapixel sensor, with a lens that can resolve details for such a sensor.
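A rough back-of-the-envelope on why the lens becomes the bottleneck, using the standard Airy-disk diffraction formula; the pixel-pitch figures for the 7D and for a hypothetical 1/3-inch, 100,000-pixel-wide phone sensor are my own assumptions, not numbers from the thread:

    # Diffraction-limited spot size vs. pixel pitch (lengths in micrometres)
    wavelength = 0.55                       # green light, ~550 nm
    f_number = 5.6                          # the 400mm f/5.6 lens mentioned above

    airy_diameter = 2.44 * wavelength * f_number   # ~7.5 um spot on the sensor

    pitch_7d = 22.3 * 1000 / 5184           # 7D sensor: 22.3 mm wide, 5184 px -> ~4.3 um
    pitch_10gpx = 4.8 * 1000 / 100_000      # 1/3" phone sensor, 100,000 px wide -> ~0.05 um

    print(airy_diameter, pitch_7d, pitch_10gpx)
    # At f/5.6 the Airy disk already spans roughly two 7D pixels; a 0.05 um pixel
    # would be an order of magnitude smaller than the wavelength of the light itself.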
Sometimes in art galleries you see these very large photographs. I'm always annoyed that these don't have all the details: if you look from far enough away they look nice, but once you get closer you just see the rasterization pattern.
I would like to see posters made out of these gigapixel images and printed with such high quality that you could actually take a closer look and it would reveal something new.
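Some quick arithmetic on what that would take, using the 600,000-pixel width quoted above and assuming a typical 300dpi print resolution (the 300dpi figure is my assumption):

    width_px = 600_000                 # width of the Tokyo panorama
    print_dpi = 300                    # common "photo quality" print resolution

    width_m = width_px / print_dpi * 0.0254
    print(width_m)                     # ~50.8 m: printing the full panorama at 300dpi
                                       # would need a poster about 50 metres wide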
> ... printed with such high quality that you could actually take a closer look and it would reveal something new.
Last time I was at the NC Museum of Art, they had on display a wall-sized aerial composite photo of the grounds. Next to it was a box full of magnifying glasses and a plaque inviting you to do exactly that.
Emergency entrances for fire/rescue. You are not allowed to put anything in front of those windows, so that firefighters know they can break those windows to easily gain access to a building. They can then be used to help people escape, or to get more firefighters into the building.
Pretty cool. Now imagine the tools the NSA has, live satellite feeds and hi-res cameras all over the US and the world with a bunch of Diet Coke drinking zombies clicking, zooming and panning all day long.
Working with satellite data I can tell you that the maximum resolution data you are likely to get from a satellite is 0.25m resolution (i.e. a pixel is the size of a car wheel).
And that's for what are likely military satellites - the public is lucky to get hold of 0.5m resolution.
Also, clouds get in the way. A lot.
For super-high-resolution imagery like you are talking about, you need something like the ARGOS system [1].
This image was made by a slowly, automatically rotating camera, but the ARGOS system seems to use a whole bunch of high-resolution cameras, like an insect's eye.
The interesting thing about the ARGOS demo on NPR was the automatic object tracking. The system can track cars and people as they move around. You could go back through the data and follow a guy back to his house after he is identified as planting a bomb. Or, more relevant to HN, after he posts something from an internet cafe.
I really think that, as technology makes pervasive surveillance possible, we need laws to explicitly limit what the state or companies can do, at least domestically.
It surprises me that people are so much more upset about the NSA gathering data, given that they willingly gave it to ad networks that want to use it to push them toward bad decisions. The highest-cost Google ads are for loans, insurance, and other services where the provider makes more money if the user makes an inappropriate choice. Imagine if Google put a Google X ARGOS on its Loon balloons and found some way to monetize it. We really need laws in place to stop anyone doing that.
It's a permissions problem. Imagine the world as one big file system we're all sharing. You 'write' stuff to this file system all the time expecting nobody to come along and use it, but they do. If you limit access to some of this data by making it illegal that doesn't mean they don't still have access to it.
Seems like what we need is a quantum-encrypted coat we can put on whenever we want complete privacy.
Atmospheric effects. Atmospheric refraction adds significant noise, fully obscuring features smaller than about 0.25 m.
Yes, ground telescopes have ways of cancelling out atmospheric effects, but the required calibration and long exposures are impossible given the orbits of spy satellites.
I don't think it takes a long exposure. Ground-based telescopes can make the simplifying assumption that they are looking at static point sources. That makes it easier to correct for camera movement and for optical defects from the lens and the atmosphere.
On the other hand, looking down is a bit easier than looking up because the atmosphere will converge rays looking down, but diverge them looking up. See http://what-if.xkcd.com/32/.
That site also claims that others say Hubble is about as good as the best spy satellites, resolution-wise.
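A rough sanity check on that claim, applying the Rayleigh diffraction limit to a Hubble-sized 2.4m mirror; the 250km orbital altitude is my assumption:

    # Rayleigh criterion: smallest resolvable angle ~ 1.22 * lambda / D
    wavelength = 550e-9       # green light, metres
    aperture = 2.4            # Hubble-class primary mirror, metres
    altitude = 250e3          # assumed low-earth-orbit altitude, metres

    theta = 1.22 * wavelength / aperture   # ~2.8e-7 rad
    ground_resolution = theta * altitude   # ~0.07 m

    print(ground_resolution)
    # ~7 cm in theory; the atmospheric effects mentioned above push the practical
    # limit closer to the 0.25 m figure quoted earlier in the thread.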
Hubble is one of the best spy satellites. It was built by the same team that builds spy satellites, using all the same parts and equipment, tested in the same facilities, etc. It just has a slightly different secondary mirror setup and a very different set of instruments and control mechanisms, since it's meant to point at galaxies and do astronomy, not at the Earth to do surveillance.
Seriously, one of the hardest parts of the Hubble program was figuring out how to do it in public view without revealing the underlying surveillance programme.
Which Firefox version are you using? The website loads fine for me (on OS X) in Firefox 24.0a2 (Aurora), but hangs Firefox 25.0a1 (Nightly). I filed a bug here: https://bugzil.la/901306
    // Bookmarklet: removes every element whose background image is the black
    // censor tile (black.png), i.e. the overlays covering the censored spots.
    javascript:$("*").filter(function () {
      // IE exposes resolved styles via currentStyle; other browsers via getComputedStyle.
      if (this.currentStyle)
        return this.currentStyle["backgroundImage"] === "url(http://360gigapixels.com/black.png)";
      else if (window.getComputedStyle)
        return document.defaultView.getComputedStyle(this, null).getPropertyValue("background-image") === "url(http://360gigapixels.com/black.png)";
    }).remove()