That is something we worry about. There seems to be a bit of a disconnect at first, as if interactions that happen on a computer screen are somehow less real than face-to-face ones. The hope is that bumping into people with a robot is sort of like making faces in a teleconference: something you might do when you're first trying it out, but once you start using it for actual work, you stop.
I know that we have certainly moved past that here (we use them to stay in communication with our manufacturer). Hopefully having two-way video, plus the fact that the robots will be used almost exclusively in business settings, will help us avoid Penny Arcade's Greater Internet Fuckwad Theory: http://www.penny-arcade.com/comic/2004/3/19/
I didn't find it annoying or awkward at all. I actually thought it was funny. If I was playing with one for the first time I'd probably do the same thing. What's the point of building this tech if you can't have some fun with it?
You could debate the usefulness of it, but it seems like it'd have plenty of interesting applications in business and security.
Need more data to know, but treating it as a technical problem instead of a social one: maybe he didn't realize it was annoying because he couldn't pick up all the little facial-expression and body-language cues they were giving to indicate it.
That interpretation isn't supported by him pointing a laser near other people's eyes; that was just being an ass. But sticking with the technical side here: a larger field of view might help the driver pick up those cues. Combining that with face tracking and magnification in the driving software would be cool. Or give everyone nearby a little remote, with buttons on the bot, for communicating negative feedback anonymously: show an indicator in the driver software, reduce the robot's mobility for a minute, or administer electrical shocks to the driver, those sorts of things.
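Purely as a thought experiment, a minimal sketch of that anonymous-feedback loop might look like the following. Every name and number here is hypothetical; none of it comes from Anybots' actual software.

```python
import time

class FeedbackGovernor:
    """Hypothetical governor: bystander button presses throttle the robot."""

    PENALTY_SECONDS = 60   # how long the robot stays slowed down (invented)
    SLOW_FACTOR = 0.3      # fraction of requested speed while penalized

    def __init__(self):
        self.penalty_until = 0.0

    def on_bystander_button(self):
        # Anonymous: we only record that *someone* objected, not who.
        self.penalty_until = time.time() + self.PENALTY_SECONDS

    def penalized(self):
        return time.time() < self.penalty_until

    def scale_speed(self, requested_speed):
        # The driver UI would also show an indicator while penalized.
        if self.penalized():
            return requested_speed * self.SLOW_FACTOR
        return requested_speed
```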
1. 360 degree camera (so I can walk in any direction without turning around)
2. 3D vision (in the operator interface!)
3. Wings, jet pack
4. Multiple lasers
5. Vuvuzela
As far as telepresence for Chinese / Indian factories goes, wouldn't it be cheaper / less awkward to hire a guy to walk around with a laptop running iChat / Google video chat? Labor in those countries is practically free.
First thing I thought about after reading this was the robot waiter in Rocky IV.
While this is certainly very cool (and I would totally buy one just to play with if money was no object), I'm rather skeptical about how many of these things they're going to manage to sell - $15k is pretty steep for iChat on wheels.
Cisco's telepresence system is $300k for a full room, and $50k for a 56" screen. Skype on the bottom end of the stack is free. I'd say that $15k for a robotic telepresence system is really, really cheap.
I agree that this is way too expensive. I bet, just like with the Segway, a lot of the money goes into keeping it on two wheels when a four-wheel device would do just fine. I'd like to hear the economic justification a company uses to buy this.
I don't think it's that expensive -- seems like it would be pretty useful in a warehouse to check up on things. Still cheaper than flying an engineer from San Francisco to the UPS warehouse in Kentucky.
I think people are assuming it will be used in the way Arrington used it -- remote presence. It's more useful as a way to survey places that you normally don't want to hang around in due to danger or noise, such as noisy server rooms, giant inventory/shipping warehouses, buildings that are in the middle of construction, outsourced manufacturing facilities in China, chemical plants, water/sewage sanitation facilities, chip manufacturing clean rooms, etc.
When my dad worked for Monsanto, he had to drive 60 miles out of his way to the chemical plant just to look at pipes, knobs, and dials to ensure that the manufacturing process was properly replicated -- that would seem like a perfect fit.
I know something like this is useful, but there are other devices that cost < $1,000 and do the same thing.
For example: if you put the head/camera of the Rovio on a stick you essentially have the same thing, except the Rovio costs only $300. What is the extra $14,700 for? It just doesn't make sense to spend so much on such a simple device.
http://www.slashgear.com/wowwee-rovio-videos-wifi-remote-web...
I don't know how much longer I can hold out before clicking the buy button.
[Edit:] After reading the product reviews, the Rovio appears to have a number of issues (e.g., poor battery life, poor video). Does anyone on HN have one of these?
You could probably do it for cheaper. But if you're a company that owns a warehouse in Kentucky, you've probably spent a few million on equipment there, and the $15,000 will feel like a drop in the bucket. It cost Zappos $20M to outfit their first warehouse -- and they didn't even have the Kiva robots at the time.
Yeah, that's nice, but telemetry isn't used in most of the situations listed above -- partially built buildings, inventory warehouses, server rooms, clean rooms. Sometimes it's just easier to discover a problem by having someone there to see what's wrong.
"...$15,000 per unit. That may sound a little steep, but keep in mind that the robot can be used by multiple people, though only one can be logged in at a time."
Yes, but then why does he say 'keep in mind that the robot can be used by multiple people'. Is this any different than, say, two people sharing a pencil?
He's just pointing out that a business wouldn't need to contemplate buying a $15,000 anybot for each employee when they can be handed off depending on who needs it at any given moment.
Not sure why the author felt like pointing that out.
I assume he's saying there isn't a licensing restriction to have as many users as you'd like. Meaning, the client doesn't have to buy the hardware product plus a per-seat license.
That's some pretty pathetic collision avoidance, given that the website advertises "Its built-in guidance system takes care of the rest by avoiding furniture & people..."
Really? It experienced the worst-case scenario -- running directly into something at full speed -- and came away completely unharmed. "Rock solid" would be how I'd describe it.
It would be very tough to implement "collision avoidance" without being overbearing on what the user can or cannot do. For example, it would be extremely annoying if the robot's collision avoidance made it actively try to stay away from walls "because the user could potentially make a sudden turn at full speed towards the wall, and then a collision could result, so therefore we should try to avoid that scenario because, damn it, we promised collision avoidance!".
The net effect would be that the user might have a very good reason for wanting to drive close to a wall, but can't because of artificial limits imposed by the software.
It's better to not waste time on those kinds of 'features' and instead focus on making the core experience better. In this case, focusing on the two-way video and reducing chat lag time.
Incidentally, those kinds of 'features' seem like exactly the kind of thing that a mature company would focus on (waste time on) while losing sight of making the actual product better, because they combine so many things mature companies like: delivering on promises to the user, sounding good on paper, etc. But that loses sight of the fact that the goal is to make the end product better, not to deliver a laundry list of features.
Why does the collision avoidance have to be a hard limit? Allow it to be toggled on and off at will by the operator and that solves all the problems you mentioned.
Collision avoidance off in cramped places full of furniture.
Collision avoidance on when too many objects are moving (people walking around).
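A minimal sketch of that toggle, assuming driving commands pass through a single filter before reaching the motors (all names and thresholds here are made up):

```python
class AvoidanceToggle:
    """Hypothetical filter: hard-stops near obstacles only when enabled."""

    MIN_CLEARANCE_M = 0.5  # assumed stop distance when avoidance is on

    def __init__(self):
        self.enabled = True

    def toggle(self):
        self.enabled = not self.enabled

    def filter_speed(self, requested_speed, nearest_obstacle_m):
        # With avoidance off, the operator gets exactly what they asked for.
        if not self.enabled:
            return requested_speed
        # With avoidance on, refuse to drive into anything closer than
        # the clearance threshold.
        if nearest_obstacle_m < self.MIN_CLEARANCE_M:
            return 0.0
        return requested_speed
```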
Actually, we allow people to hit walls, for the reasons you mention above, but we make sure the robot slows down a little before hitting them. Basically we allow any movement, but you can only go fast when you're in an open area, and if the robot detects that you're going to barely clip an obstacle, you barely miss it instead (that's the hope, anyway).
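In code, that "fast only in the open" behavior could be as simple as a continuous clamp on speed as a function of clearance. This is a sketch of the general idea, not necessarily how Anybots does it; the speeds and distances are invented:

```python
def allowed_speed(clearance_m,
                  max_speed=2.0,    # m/s in the open (invented number)
                  creep_speed=0.2,  # m/s right next to an obstacle
                  near_m=0.3, far_m=2.0):
    """Scale the speed cap smoothly between creep_speed and max_speed.

    Below near_m meters of clearance you can only creep (but there's no
    hard stop, so deliberately bumping a wall is still possible); beyond
    far_m you get full speed.
    """
    if clearance_m <= near_m:
        return creep_speed
    if clearance_m >= far_m:
        return max_speed
    t = (clearance_m - near_m) / (far_m - near_m)
    return creep_speed + t * (max_speed - creep_speed)

# The drive loop would then send min(requested, allowed_speed(clearance))
# to the motors.
```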
Hey, I think my original post was totally misinterpreted. I was referring to the instance near the beginning where Arrington drove the robot forward, directly into Dan Casner, a few times. That mistake will almost certainly be repeated by everybody who uses QA, as they try to get a better view of the people they're talking to. So, I was commenting on the lack of a hard limit on how close one can get to objects directly in front of the robot. Will this be addressed in the final implementation?
I believe there might also be a ROS package that detects ankles (well, legs) in 2D laser scans, which could help.
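If that's the leg_detector from the ROS people stack, hooking its output into a speed limiter could look roughly like this. The topic and message names here are my best guess at that package's interface; check the docs for your ROS distro before relying on them:

```python
import math
import rospy
from people_msgs.msg import PositionMeasurementArray  # assumed message type

def on_legs(msg):
    # Each detection is a person (pair of legs) in the laser frame; the
    # distance to the closest one could feed a speed clamp like the one above.
    distances = [math.hypot(p.pos.x, p.pos.y) for p in msg.people]
    if distances:
        rospy.loginfo("closest person: %.2f m", min(distances))

rospy.init_node("leg_listener")
# Topic name is an assumption based on leg_detector's documentation.
rospy.Subscriber("leg_tracker_measurements", PositionMeasurementArray, on_legs)
rospy.spin()
```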
I'm reminded of the robots from Suspended (the Infocom game). We'll have some Anybots robots tuned for audio, some for video, some for... what were the others?
It has one. Autodocking isn't finished yet, but it's on the work queue. You can see the dock in the background of the video (it's the white C-shaped thing sitting on the ground).