The fact that tacking on the phrase “unreal engine” to an image generation prompt makes the image way better is both hilarious and kind of terrifying to me.
In the first generation of AI there was this idea that we could use AI to understand human consciousness; just make a conscious machine and then analyze it, pause it, rewind it, etc. The more we learn about AI, the more I think we’ll lose any chance to understand them long before they pass a complexity threshold where we start debating if they are conscious.
Communing with AI is starting to look more like some kind of Eldritch incantation…
It makes perfect sense to me – images that appear with the words “unreal engine” will tend to be a product of dedicated work to make good looking pictures by working on or in Unreal Engine. Compared to an arbitrary mass of images, these will probably tend to be visually interesting, have perfect color and exposure, and have a useful homogeneity of style.
Interesting how much the people with boots on the ground and no PR departments to filter them actually do seem… pretty aware of how much we’re all going to die? Or at least they seem aware the probability of igniting the atmosphere is over 50% and not nicely contained in the distant future. Something of an update there for me.
Well, our teacher taught us that we live in the world "Beyond the Reach of God" so we'd better be aware of what the stakes are. ;)
With that said, my optimism about alignment was slightly increased lately given the research about prosaic stuff. Also, all key members of EAI are very alignment-pilled and will probably focus on that part much more in the future especially if scaling continues unabated.
This is far-fetched sci-fi problem invention. The only real danger of AI in the next 1000 years is in things no one in the field is seriously addressing: use of AI in things like law enforcement, trained on bad data, to accelerate and justify existing systemic biases.
Not that I disagree, but it's not just a danger of AI, it's a danger of complex self-optimizing systems in general.
We've got a pretty robust complex self-optimizing system already, which goes by names like "society" and "the free market". You point it at bad data and it will reinforce that data. If demographic A is poorer than demographic B, market forces will determine that giving loans to demographic B will get you better risk-adjusted returns, causing demographic B to become even more relatively prosperous, justifying future decisions to prefer giving loans to demographic B.
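The loan feedback loop above can be sketched as a toy simulation. Everything here is illustrative: the group names, the proportional-credit rule, and all the numbers are assumptions made up for the example, not empirical claims.

```python
# Toy model of the self-reinforcing loan dynamic: credit is allocated in
# proportion to current wealth, and credit compounds wealth, so an initial
# gap between two groups widens over time. All parameters are illustrative.

def simulate(wealth_a: float, wealth_b: float, rounds: int = 10,
             growth: float = 0.1) -> tuple[float, float]:
    """Each round, each group's wealth grows by a return scaled by its
    share of total wealth (the lender's 'risk-adjusted' preference)."""
    for _ in range(rounds):
        total = wealth_a + wealth_b
        wealth_a += growth * wealth_a * (wealth_a / total)
        wealth_b += growth * wealth_b * (wealth_b / total)
    return wealth_a, wealth_b

a, b = simulate(90.0, 100.0)
# The relative gap widens: b/a ends up larger than the initial 100/90.
print(f"initial ratio: {100/90:.4f}, final ratio: {b/a:.4f}")
```

No individual actor here is biased; the widening gap falls out of the allocation rule alone, which is the point of the comment above.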
Which is to say that, while I think it's good for folks thinking about AI alignment to worry about this problem, all of us should worry about this problem.
What they did is cool and impressive but that's an extreme oversimplification.
Building a small version of something that already exists, with code you can look at (GPT-2), a paper, etc., is always going to be cheaper and easier, even if we ignore that OpenAI works on and publishes more than GPT. This is before even mentioning that they have an enormous amount of free compute from Google (and others), while Azure credits made up a lot of the $1bn investment OpenAI received.
GPT-J was trained on TPU VMs generously provided by Google as part of their TPU Research Cloud. It's a smaller model, true, but the results are almost on par with GPT-3's reported numbers.
It's more that, if "amateurs"/volunteers can replicate this, it means OpenAI doesn't have much of a moat.
(OK, I admit that it wouldn't have been possible without Google's donated TPUs. But the cost isn't so outrageous that it's out of reach for any reasonably funded startup).
This is clearly true, but it's also far cheaper than we could get this capability if we were to really staff a project.
If there are volunteers who are capable of and want to contribute to the information commons and are merely constrained by compute, we should clearly help them.
> so a few dedicated nerds wanted to see how far we could get with that. In all honesty, we didn’t actually expect to get very far, but it was the height of a pandemic and we didn’t exactly have anything better to do.
Think about it: the pandemic caused many people to stop at the same time and for a long time. GPT-J took about 10 months to be released. It may not have been possible in a different context.
This is super interesting. The way it's worded, I honestly expected it to be for research scientists or ambitious projects like EleutherAI. Really appreciate the HN comment and blog post for that perspective!
Turning down hundreds of thousands of dollars of cloud credits because of vendor lock-in is extremely shortsighted for the overwhelming majority of projects.
I think it is extremely LONG-sighted: an individual research institute might save $100k right now, but the lock-in might force it to pay 10x that in 5 years to continue its research. Not to mention it might affect the ability of smaller institutes and individual researchers to enter the field, which is also worse for research as a whole.
If Google jacks up the price of TPUs or terminates your TPU access because they don't like you, you're screwed. That's quite a high risk for companies with commercial products.
I think they do ask what you are planning to use the resources for and what your qualifications are, and they decide whether to allocate TPUs, and how many, based on that.
Yes, they do, but it's not really a grant, and they don't really follow up on what you do unless you contact them yourself. You just fill out a form, and if you get accepted, you get an email later with access. At least that's how it was when I got accepted, but things might've changed since I last heard.
Personally I have to say the sheer amount of memeing and editorializing here is just very distracting. I mean I appreciate that stuff as much as anyone who grew up in the 2000s but some blog posts really dial it up to 11 and become almost unreadable.
In addition to the sibling comments translation, the comedy comes from the image itself.
The person is striking an incredibly exaggerated sporting pose, wearing actual gym wear, as if he were operating some incredible piece of cardio exercise equipment.
Actual "pedalos" are those things your elementary school would get out once or twice a year for festivals for the kids to have some fun.
They're incredibly awkward and hard to get moving, to the point where it's more of a coordination exercise for small kids. You certainly can't get any speed on them, so the idea of someone using a pedalo to go faster than walking is ridiculous.
This makes the fact that there's an adult posed like this on an adult-sized pedalo, basically saying "gotta go fast", hilarious in so many ways.