Really cool. This reminds me of Bret Victor's talk where he stresses the importance of seeing the effect of your code immediately (http://vimeo.com/36579366).
It's not clear from the description but it would be great if it could immediately render whatever Core Graphics code I was writing, as I was writing it, much like this in-development tool (http://www.youtube.com/watch?v=JupqhcT4ONY) which popped up in my Twitter stream a couple days ago.
What Bret Victor showed in his talk was quite amazing. It really opens up one's mind to what all is possible and what can be changed.
The Core Graphics video is interesting. Live rendering like that can help developers see what they are actually coding. The whole compile and test cycle just breaks the chain of thought which goes on while coding.
I love the execution of this app, but I disagree with the underlying idea for most cases.
Drawing your UI is never ever going to be as cheap as loading a converted PNG (which Apple's modified pngcrush converts for you). I think a lot of devs have the draw/vector vs. precomposed bitmap tradeoff the wrong way round.
Drawing all of your gradated UIButtons with CoreGraphics methods is a false economy compared to just loading a stretchable PNG. Almost all of Apple's UI system imagery is bitmap based, and for a good reason.
Not sure I can agree with you. I think vector drawing can be better in both size and drawing speed.
It depends a lot on the representation. Bitmaps will not get much better than what we've got at the moment with JPEG or GIF. Vector representations can vary wildly in compactness and rendering speed. E.g. SVG bad, CSS gradients good; SVG bad, EPS/PDF good.
Obviously, drawing on the CPU can be much quicker than reading from disk. And if you draw into the GPU buffer first, you get all of the advantages of GPU acceleration later.
And it can be much more bandwidth efficient.
Take streaming a bitmap over HTTP. You have to take the penalty of the PNG header, the penalty of assembling a data: URL, and a fixed 33% penalty for Base64-encoding the binary bits.
So to stream a 1x256px gradient sprite there is a shedload of wasted bits.
If you're in vector land you can just send the gradient endpoints.
If you're not in WebKit land, or working with pre-sent vector algorithms (rare), you also have to stream the vector drawing code. In that case you can amortize the cost of the drawing code with a simpler/smaller wire representation.
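The overhead argument above is easy to check with a back-of-the-envelope script (a language-agnostic sketch in Python; the sprite dimensions are taken from the comment, and the 4/3 ratio is fixed by the Base64 encoding itself):

```python
import base64

# Raw bytes of the 1x256px RGBA gradient sprite from the example:
# 256 pixels * 4 bytes per pixel, before any PNG framing.
raw_pixels = bytes(256 * 4)             # 1024 bytes
encoded = base64.b64encode(raw_pixels)  # what a data: URL would carry

print(len(raw_pixels))   # 1024
print(len(encoded))      # 1368 -> the fixed ~33% Base64 penalty

# The vector alternative: just describe the gradient endpoints as text.
vector = "linear-gradient(#000000, #ffffff)"
print(len(vector))       # 33 bytes, before gzip
```

Even ignoring the PNG header and chunk overhead entirely, the bitmap route costs over 40x more bytes on the wire for this particular sprite.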
Using something like the Apple-provided CGLayer, you can draw the image once, then use it like a bitmap while it is in memory, AND it can be kept cached on the graphics card. These guys could easily produce code that does that.
In practice, CGLayer is not some pixie dust that magically makes your drawing fast. It helps only in a specific instance: when you are drawing repeated instances of the same content, into the same context. Otherwise, it ends up being extra complication for no benefit.
(Note that CGLayer is not related to CALayer or CGTransparencyLayer -- they are absolutely separate things. CALayers are incredibly useful in practice; CGLayer was kind of a dead end.)
What about localization? That combined with what others have mentioned below adds up to a lot of imagery.
There's also no reason you can't build your images the first time they're used, and save them out to PNG files for successive uses (if profiling determines that it is indeed more optimal). Even if you end up using graphics at runtime, you'll save work by using code to generate your graphics.
For instance, simply write utility programs to generate all of your graphics with code -- this is definitely faster than photoshop when you have buttons in lots of different languages. It's also useful if you want to use a font where it's not legal to embed it. (You can generate all of your button imagery using code and localization files, then remove the embedded fonts and use the generated PNG files in the release version.)
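The render-once idea above is essentially memoization: draw on first use, then serve the cached bytes. Here's a minimal, platform-neutral sketch in Python (the `draw_button` routine and cache keys are hypothetical stand-ins; real code would run Core Graphics and write actual PNG files):

```python
import hashlib

class ImageCache:
    """Render-once cache: draw on first use, reuse the bytes afterwards."""
    def __init__(self, draw_fn):
        self.draw_fn = draw_fn
        self.cache = {}       # key -> rendered bytes (stand-in for a PNG on disk)
        self.draw_calls = 0   # instrumented so we can see the cache working

    def image(self, key):
        if key not in self.cache:
            self.draw_calls += 1
            self.cache[key] = self.draw_fn(key)
        return self.cache[key]

def draw_button(label):
    # Hypothetical "drawing" routine; real code would render with Core Graphics.
    return hashlib.sha256(label.encode()).digest()

cache = ImageCache(draw_button)
a = cache.image("OK")
b = cache.image("OK")   # second request is served from the cache, no redraw
assert a == b and cache.draw_calls == 1
```

Persisting the cache to disk (as the comment suggests, writing PNGs on first run) is the same pattern with the dictionary swapped for the filesystem.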
"While some of the apps have received additional features, it seems likely that the increase in size is mainly down to the huge graphics needed to fill the new iPad's 2048 x 1536 Retina Display. It's worth remembering that these are only download sizes, and once installed the apps may be even larger. Regular apps will likely receive a similar bump in size once developers update them with hi-def graphics"
I just don't understand how they could possibly be five times larger. There are 4x as many pixels, so by straight pixel count it could be 4x larger.
However, current PNG compressors are really good at optimizing flat color areas, gradients, and lines. It seems to me like most images would get only a relatively small increase in size due to this upsizing (1.5x-2x).
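The intuition that smooth content doesn't cost the full 4x can be sanity-checked with plain zlib, since PNG is essentially per-row filtering followed by DEFLATE. This is a rough sketch, not real PNG encoding, and the synthetic gradient is a best case; photographic content would fare worse:

```python
import zlib

def gradient_image(size):
    # Grayscale diagonal gradient, one byte per pixel.
    return bytes(((x + y) * 255 // (2 * size - 2)) & 0xFF
                 for y in range(size) for x in range(size))

small = gradient_image(256)   # the "1x" asset
large = gradient_image(512)   # the "2x" Retina asset: 4x the pixels

c_small = len(zlib.compress(small, 9))
c_large = len(zlib.compress(large, 9))

print(len(large) / len(small))   # 4.0 in raw pixel terms
print(c_large / c_small)         # well under 4x once compressed
```

Flat and gradient regions are almost free to scale up; it's texture-heavy artwork that actually pays the full pixel-count price.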
iOS devices can render Infinity Blade smoothly. I don't think a few shapes (probably automatically cached as a bitmap) are going to be an issue.
And I don't get why Apple bothers with CgBI PNGs, since an endian swap on ARM is a single-cycle instruction, and it's probably free anyway while moving the bitmap to GPU memory.
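For context, CgBI-style PNGs store pixels byte-swapped (BGRA rather than RGBA). The swap being discussed is trivial; here it is in Python just to show the operation on a single hypothetical pixel (real decoders do this in bulk, or on the GPU):

```python
def bgra_to_rgba(data):
    """Swap channel order per pixel: B,G,R,A -> R,G,B,A."""
    out = bytearray(data)
    out[0::4], out[2::4] = data[2::4], data[0::4]
    return bytes(out)

pixel = bytes([0x10, 0x20, 0x30, 0xFF])   # B, G, R, A
print(bgra_to_rgba(pixel).hex())          # 302010ff
```

Applying the swap twice is the identity, which is why the cost argument cuts both ways: whoever ends up doing it, it's cheap.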
I don't think the Core Graphics vector stuff is in any way GPU accelerated, so the Infinity Blade comparison isn't appropriate.
The SoCs in iOS devices are shared-memory systems, so system memory is GPU memory.[1] But you're right, the endian swap could easily be done in the PNG decoder.
[1] I have to admit I'm not sure it's implemented as zero-copy through to userspace, though. The normal, 20-year-old GL texture APIs can certainly only be implemented via copying, and I'm not sure if OpenGL ES has anything like mappable texture pixel buffers. It would make a lot of sense though, considering how memory-starved embedded systems are.
So basically we stop using graphics and start using code (UIViews) to draw the images.
Does it do caching? A quick search for cache didn't turn up anything. If we have learned anything from using UITableViews with custom UITableViewCells then it is that drawing is expensive.
The underlying idea of generating your images programmatically is not a bad one to start with, but performance-wise, if the image (view) is going to be static, rendering into a bitmap and using that subsequently is the more performant solution.
Maybe the tool we are looking for is not one that generates UIViews, but one that lets us get the images correctly named and at the right resolutions with a click? Like automatic slicing and exporting at predefined resolutions with predefined naming schemes. I just think this would be better done off the device.
For interactive views (drag, move, ...), where redrawing is necessary, this could be a valuable tool -- if it can do that.
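The export-with-naming-scheme idea is simple enough to sketch. This assumes Apple's `@2x` suffix convention; the function name and scale list are made up for illustration:

```python
def export_names(base, scales=(1, 2), ext="png"):
    """Generate Apple-style asset filenames for each scale factor."""
    names = []
    for s in scales:
        suffix = "" if s == 1 else f"@{s}x"
        names.append(f"{base}{suffix}.{ext}")
    return names

print(export_names("button"))   # ['button.png', 'button@2x.png']
```

A real tool would pair this with the actual rasterization at each scale, but the naming convention is the part developers keep getting wrong by hand.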
When a UIView renders, it does so to a layer. CALayer caches its bitmap content, so moving something on screen doesn't require redrawing. Most implicitly animated properties (opacity, transforms, etc.) won't require a redraw. As mentioned below, it's fairly trivial to save a context to a PNG file if you must.
This highlights one of the main problems with selling dev tools on the App Store - it looks amazing, but there's no way I'm shelling out $80 without trying it first.
Not sure what you are accusing Apple of ... This developer does have a choice you know. He could have made a trial available on his own site. Apple does not disallow that.
When all the customers are going to the supermarket to buy their fresh veg, you can set up your own puny little stall in the alley, but guess what, no one's buying because they can't even find you.
Apple does not allow you to sell iOS apps outside its app store, and there's every reason to believe they are pushing the desktop market in the same direction. Why wouldn't they?
> Apple does not allow you to sell iOS apps outside its app store,
This is incorrect in a minor, but fundamentally important way, I think:
Apple does nothing to prevent you from selling outside their store. What they've done is make it difficult for the average iPhone owner to buy from you.
You need to target a specific demographic that can overcome those obstacles and will pay enough to make it worth your time.
Just what I was looking for. If these guys implement the same feature (probably in a future build), it would help many people like me.
I made an open source project last year called Mockdown that does the opposite of PaintCode. It takes declarative design code and converts it to PNGs so they can be used in iOS apps. The original idea was to do mockups but it actually turned out to be more interesting as an open source design tool.
Here's a short 5 minute walkthrough in designing the Mail app in Mockdown:
Windows Metafiles contain(ed) serialized GDI graphics calls to generate the picture in a resolution-independent way, while still working as a vector file format that other applications could parse.
Generating static ObjC seems like a dumb idea except in very specific cases.
PDF is actually serialized CG graphic calls. This app should produce PDF files rather than raw CG graphics code. Of course, there are lots of apps that can produce PDF.
Craig Hockenberry (http://twitter.com/chockenberry) - Got an email from PaintCode's support: a demo and tutorials are coming. Replied with some friendly advice about releasing products…
We've prepared 4 short video guides demonstrating PaintCode in action. These guides show various basic drawing techniques: http://paintcodeapp.com/examples.html
You can also find examples of PaintCode documents on the same page. Some of the examples include simple Xcode projects that use the drawing code generated by PaintCode.
Seems like a nice tool at a high-but-fair price point. I look forward to seeing some deeper reviews, because the Illustrator trial I used to design my app icon expired a long, long time ago. :)
I bought the app and just came by to drop my 2 cents.
The good:
It's great. I've been looking for something like this for a while. I wish Xcode could do this. If you are a serious iOS developer, you will love this tool. Very useful.
The bad:
Some of the code cannot be easily used, especially if you need to manipulate positions at runtime. It should use relative positions, where 0 is "somewhere" within the object. The generated code may also cause some UIViews to blur; sometimes you need to round x/y to whole pixels.
The ugly:
The price is not aligned with apps at the same level of complexity.
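The blur complaint above comes from drawing on fractional pixel boundaries. The usual fix is to snap coordinates to the device pixel grid, which on Retina screens means half-point boundaries. A minimal sketch (the function name is made up; on iOS you'd use the screen's scale factor):

```python
def pixel_align(value, scale=1):
    """Snap a point coordinate to the nearest device pixel for the given scale."""
    return round(value * scale) / scale

print(pixel_align(10.3))      # 10.0 on a 1x screen
print(pixel_align(10.3, 2))   # 10.5 on a 2x Retina screen
```

Note that rounding to whole points (scale 1) on a Retina screen throws away legitimate half-point positions, so the scale factor matters.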
The idea is cool, but I must say the app itself gives me the same feeling I get from Microsoft's Expression Blend. Or any Silverlight app, for that matter.
Other than dynamically-related colors, is this much different from designing in Illustrator and exporting to PDF, which I understand is well-supported in iOS and OS X?
Code generation seems to be the selling point. I've never seen well-executed GUI code generation on any platform, however. Windows Forms was especially awful...
It looks like the biggest extra is the ability to insert variables into the generated code, so you can make e.g. a blue and an orange button from the same output.
Sometimes ideas are just floating around, aren't they? I was thinking of very similar things lately as well. Obviously this application must be under development for quite a while already.
One thing I'm not totally sold on is the fact that it generates code, though. It is not the running application that you interactively tweak. Because that is what Bret was showing in his talk, and that is also what you would be doing in Smalltalk. I'm not sure this is as 'immediately connective' (to use a variant of Bret's language) as it seems.
Another approach would be to store the 'drawing' in a data structure and have a special view that takes it and puts it into an image (or on screen immediately). That would also open possibilities for caching, something that would likely make sense, as other commenters have mentioned. Such an approach would come dangerously close to implementing/reinventing SVG, though. Another benefit of such a data structure would be that it could have a simple, textual representation. JSON maybe? Even XML would do. And that would greatly help with version control and diffing.
I don't see why such a feature could not be added to this app.
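The drawing-as-data idea can be sketched concretely. Here's a tiny, entirely hypothetical JSON format and a stub interpreter in Python; a real renderer would translate each command into Core Graphics calls instead of log strings:

```python
import json

# A hypothetical, SVG-ish drawing description that could live in version control.
doc = json.loads("""
[
  {"op": "rect", "x": 0,  "y": 0,  "w": 100, "h": 44, "fill": "#3366cc"},
  {"op": "text", "x": 50, "y": 22, "value": "OK"}
]
""")

def interpret(commands):
    # Stand-in for a renderer: real code would call CGContextFillRect etc.
    log = []
    for cmd in commands:
        if cmd["op"] == "rect":
            log.append(f"fill rect {cmd['w']}x{cmd['h']} in {cmd['fill']}")
        elif cmd["op"] == "text":
            log.append(f"draw text {cmd['value']!r}")
    return log

print(interpret(doc))
```

Because the document is plain text, diffs in version control show exactly which shape changed, which is precisely the benefit the comment is after.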
I'm already doing all this with python scripts for Inkscape - draw the entire UI in Inkscape, name things in the DOM appropriately, run a few scripts -> MOAI code (Lua) or C Code (SDL_svg).
Haven't quite got it ready for launch yet, but rest assured the idea is not new. There have been Inkscape exporter tools for this sort of thing for years, you just have to know where to look (inkexport, &etc.)
You could just create your assets in Illustrator as vectors and you would be set for any future resolution increases. However, the PNGs would still increase your bundle size.
What I'm curious about, and maybe someone here has the answer, is whether there would be a significant performance gain from using Core Graphics over just loading a PNG?
On the Apple developer web site you can watch the WWDC videos; there is one on graphics and animation that talks about just that.
The conventional wisdom is that, especially for small images, it is much more efficient to draw them with Core Graphics than to use an image, even more so if there is transparency in the image.
This is true even on non-iOS systems. Microsoft has said for years that generating a 16x16 image is faster than getting it from disk (depending on complexity and disk speed).
Plus, code does tend to be more compact (again, within reason), so there is less to load when your application starts.
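The generate-instead-of-ship trade works because both paths must produce identical bytes; the only question is which is cheaper. A toy illustration in Python (the 16x16 checkerboard and file name are made up; the point is that the generated image and the shipped asset are interchangeable):

```python
import os
import tempfile

def generate_icon(size=16):
    # Hypothetical 16x16 grayscale checkerboard, one byte per pixel.
    return bytes(255 * ((x + y) % 2) for y in range(size) for x in range(size))

# Shipping the asset: write it out once, as a build step would.
path = os.path.join(tempfile.mkdtemp(), "icon.raw")
with open(path, "wb") as f:
    f.write(generate_icon())

# At runtime, regenerating yields byte-identical results to loading,
# and for tiny images it avoids the disk I/O entirely.
with open(path, "rb") as f:
    assert f.read() == generate_icon()
```

Whether generation actually wins depends on the drawing's complexity versus the storage's latency, which is why profiling (as suggested above) should decide.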
This is awesome. I've been waiting for this, and it was on my list of projects to build. However, it looks like the execution is really well done. I would love to see a trial version though.