How I built fortmcmoney.com – Part 2 – Exploration

In the second post of this series, I will talk about the different techniques and iterations I ran through to build the photo experience on fortmcmoney.com. If you haven't read the first post, I invite you to read How I built fortmcmoney.com – Introduction before reading this one.
About the photo experience
Fort McMoney.com is a virtual representation of a real city in Alberta, Canada, named Fort McMurray. We built 23 locations from photos taken during the two years of investigation. These photos were very different from one another: some were shot outside, some inside, some at night or in bright daylight, so we had to find a way to blend them into a beautiful immersive experience. The main goal was to make you feel like you were really in Fort McMurray. So the design team at Toxa did a lot of compositing and image manipulation in Photoshop to create these beautiful panoramic pictures. My job was to animate these panoramic pictures to create the impression that you are on the real location.
Iterate, iterate, iterate….
The first location the design team provided me was the starting point of the game, the campground. It consists of a 7651 x 1024 background image, 3 characters and 2 objects. Here is a summary of the interactive scenario the creative team asked me to build:
- The image has to move left/right/top/bottom depending on the mouse location.
- The image should not be distorted, and no black area should ever be visible.
- When hovering over objects and characters, we must blur the background to create a depth-of-field effect.
- Ability to close up on a specific area of the panorama to display additional information about a character, object or TV screen, or call-to-action buttons.
- Close-ups should not be blurred or pixelated.
- iPad compatible.
I built 11 iterations during this process to achieve the result you see on fortmcmoney.com. Here is a summary of some of the failures and successes I had along the way. Note that the interactive scenario was not that clear at the beginning. It also evolved during the process, which brought even more iterations along the way.
Moving x and y
The first prototype I had to build was moving left to right and top to bottom on a 7651 x 1024 pixel image. I had already built panoramic scrolling in the past, but nothing that big. So I tried the most simplistic way to do it and looked at the result.
Simply calculate the width and height of the image, and move it depending on the mouse's position. I was able to achieve 60 FPS on my brand new MacBook Pro, but when testing on an older computer (a 2006 iMac), the results were not that good. And if the results were not good on an older desktop computer, what about mobile? Right there, I knew that this was not the way to build it. So I moved on to another iteration.
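The idea behind that first prototype can be sketched in a few lines of ActionScript 3. This is a minimal reconstruction, not the project's actual code; the variable names are mine.

```actionscript
import flash.display.Sprite;
import flash.events.Event;

// panorama holds the 7651 x 1024 background Bitmap
var panorama:Sprite;

function onEnterFrame(e:Event):void
{
    // Ratio (0..1) of the mouse position across the stage
    var rx:Number = stage.mouseX / stage.stageWidth;
    var ry:Number = stage.mouseY / stage.stageHeight;

    // Offset the image so it always covers the stage and never shows black
    panorama.x = -rx * (panorama.width  - stage.stageWidth);
    panorama.y = -ry * (panorama.height - stage.stageHeight);
}
addEventListener(Event.ENTER_FRAME, onEnterFrame);
```

Simple and readable, but it asks the renderer to redraw a huge display object every frame, which is exactly where older machines choked.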
ScrollRect
Then I tried to use Sprite.scrollRect. Performance was better, no doubt about it. The problem was managing the browser's resize to make sure the screen was always filled with the image. I tried various approaches and struggled to make it work, but never figured out how to do it. I'm sure there is a way, but I went on to the next iteration as we had to move on.
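For reference, the scrollRect variant flips the logic around: instead of moving the Sprite, you move a viewport rectangle over it. Again, a hypothetical sketch rather than the original code.

```actionscript
import flash.display.Sprite;
import flash.geom.Rectangle;

var panorama:Sprite;
panorama.cacheAsBitmap = true; // scrollRect performs best with bitmap caching

function updateViewport():void
{
    var rx:Number = stage.mouseX / stage.stageWidth;
    var ry:Number = stage.mouseY / stage.stageHeight;

    // The rectangle defines which part of the panorama is visible
    panorama.scrollRect = new Rectangle(
        rx * (7651 - stage.stageWidth),
        ry * (1024 - stage.stageHeight),
        stage.stageWidth,
        stage.stageHeight);
}
```

The catch is that the rectangle's size is tied to the stage size, so every browser resize (Event.RESIZE) forces you to rebuild the rect and re-validate that the image still fills the screen, which is where it got tricky.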
Blurring on hover
That was a pretty good challenge, since the background image was so huge. I knew from the start that dynamic blur was out of the question, especially on mobile. We had to pre-render the blurred backgrounds and display them only when necessary. Once built that way (combining the clean and blurred backgrounds in a single sprite, moving it on both axes and toggling visibility as necessary), I had some pretty good results without any code optimization at the time. I was very happy at that moment, until I tried to code the zoom effect…
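The trick boils down to stacking the two pre-rendered versions in one container and flipping visibility on hover. A minimal sketch, with invented names:

```actionscript
import flash.display.Bitmap;
import flash.events.MouseEvent;

var clean:Bitmap;   // pre-rendered clean background
var blurred:Bitmap; // pre-rendered blurred version, same size, stacked on top
blurred.visible = false;

function onCharacterOver(e:MouseEvent):void
{
    // Depth-of-field effect: the background goes blurry behind the character
    blurred.visible = true;
}

function onCharacterOut(e:MouseEvent):void
{
    blurred.visible = false;
}
```

Since both bitmaps live in the same moving container, they always stay perfectly aligned, and no per-frame filtering is ever computed.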
Zooming
That was the feature that required all the extra work to make everything run smoothly on both desktop and mobile. I tried 2 or 3 different techniques using the classic Flash display hierarchy; nothing worked the way I envisioned it.
Scaling up a 7651 x 1024 image was a pretty bad idea. Performance was horrible, end of the line.
Then I tried to create a BitmapData the size of the screen, BitmapData.draw() my panorama's Sprite into it, put it on stage and animate it. The results were very good on my working machine (60 FPS), but not on my testing desktop machine (around 20-30 FPS).
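The snapshot approach looks something like this (a sketch under my own naming, not the project's code): capture the currently visible region once, then scale that single screen-sized Bitmap instead of the whole panorama.

```actionscript
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.geom.Matrix;

// A bitmap exactly the size of the screen
var snapshot:BitmapData =
    new BitmapData(stage.stageWidth, stage.stageHeight, false, 0x000000);

// panorama.x/y are negative offsets, so drawing the panorama with its own
// stage position as the transform reproduces exactly what is on screen
var m:Matrix = new Matrix();
m.translate(panorama.x, panorama.y);
snapshot.draw(panorama, m);

var zoomLayer:Bitmap = new Bitmap(snapshot, "auto", true); // smoothing on
addChild(zoomLayer);
// ...then tween zoomLayer.scaleX / scaleY toward the close-up target
```

Scaling one screen-sized bitmap is far cheaper than scaling the full panorama, but since the snapshot only contains screen-resolution pixels, zooming in can only magnify them, which is exactly the pixelation problem described below.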
With that in mind, I showed this prototype to the design and creative team. They were not happy at all about the image quality when zooming, and they were right. The image was so pixelated that it looked like the photo was taken with a 1 MP camera. That was not the kind of image we wanted in this project.
So, back to the drawing board: the design team provided me with high resolution images for zooming. I exported these close-ups as PNG and imported them in Flash to use when zooming. The performance was as good as the previous technique, but the image quality was far better.
Now my task was to make it smooth as butter. To do so, I knew I had to use something more powerful than the traditional Flash display list.
Something like the GPU.
The power of the GPU
The GPU is a graphics chip available in all desktop computers and mobile devices. It can render high quality graphics at light speed. Flash enabled GPU graphics through Stage3D last year. The problem: the code needed to draw a simple rectangle on screen is like trying to learn Mandarin. Fortunately, the Flash community developed a simple framework, called Starling, that abstracts the Stage3D interaction with the GPU behind a hierarchy similar to the Flash display list. If you are involved in Flash development and have never tried Starling, you have to get your hands on it. It's amazing.
Starling
Here we go, a complete rewrite of my prototype for this version. Since all my previous experiments were based on the traditional Flash display list, I had to rebase everything to make it work with Starling's display list.
Starling has a specific limitation with images. Images are displayed using Textures, the equivalent of a BitmapData contained in a Bitmap. From there, nothing too difficult, except that a Texture could not be bigger than 2048 x 2048, which is under BitmapData's size limit. So I had to cut my background image into smaller pieces to make it work.
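Slicing a wide bitmap into Starling-friendly textures can be sketched like this. Function and variable names are illustrative; since the panorama is only 1024 pixels tall, only the width needs cutting.

```actionscript
import flash.display.BitmapData;
import flash.geom.Point;
import flash.geom.Rectangle;
import starling.textures.Texture;

function sliceToTextures(source:BitmapData, tileWidth:int = 1024):Vector.<Texture>
{
    var textures:Vector.<Texture> = new Vector.<Texture>();

    for (var x:int = 0; x < source.width; x += tileWidth)
    {
        // The last tile may be narrower than tileWidth
        var w:int = Math.min(tileWidth, source.width - x);
        var tile:BitmapData = new BitmapData(w, source.height, true, 0);
        tile.copyPixels(source,
                        new Rectangle(x, 0, w, source.height),
                        new Point());
        textures.push(Texture.fromBitmapData(tile));
    }
    return textures;
}
```

Each texture then backs its own starling.display.Image, and the images are positioned side by side inside one container so the panorama still moves as a single unit.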
That being done, the panoramic picture was moving smoothly on both desktop and mobile; I was so happy. I was using PNG images directly embedded into my SWF. The SWF file size was big, but I knew I would be able to figure out this problem later on. What concerned me more was GPU memory usage. For the campground only, with a sliced 7651 x 1024 texture for the clean background and the same thing for the blurred background, my application was using around 500 to 700 MB of memory! That was huge for a single location, without counting any of the other modules required for the project. It was more than what's available on most mobile devices!
Another problem: mobile devices were taking a fair amount of time to upload PNG images to the GPU, which translated into a longer loading time. It was not the biggest problem on earth, but something that annoyed me at the moment.
Searching for a way to decrease memory usage with PNG, I found out about Adobe's ATF format for Starling. Bingo, it was the way to go.
Starling and ATF
If you don't know about the ATF format, you have to check it out. In short, it's a compressed texture format encoded for the GPU, which results in faster rendering, a lower memory footprint, faster uploads to the GPU and more.
ATF has some restrictions when you encode your images as ATF files. Your image source must be JPG or PNG, and your image width and height must be powers of 2. That being said, I decided to cut my panoramic images into pieces of 1024 x 1024. This resulted in 8 textures for the clean background and 8 textures for the blurred background.
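At runtime, an .atf file is loaded as raw bytes and handed to Starling. A minimal sketch (the URL and handler names are placeholders):

```actionscript
import flash.events.Event;
import flash.net.URLLoader;
import flash.net.URLLoaderDataFormat;
import flash.net.URLRequest;
import flash.utils.ByteArray;
import starling.textures.Texture;

var loader:URLLoader = new URLLoader();
loader.dataFormat = URLLoaderDataFormat.BINARY; // ATF is binary data
loader.addEventListener(Event.COMPLETE, onAtfLoaded);
loader.load(new URLRequest("textures/campground_0.atf")); // placeholder path

function onAtfLoaded(e:Event):void
{
    var bytes:ByteArray = URLLoader(e.target).data as ByteArray;

    // The compressed data is uploaded to the GPU as-is, which is why
    // ATF both loads faster and uses far less texture memory than PNG
    var texture:Texture = Texture.fromAtfData(bytes);
    // ...assign the texture to a starling.display.Image
}
```

The encoding itself is done offline with Adobe's png2atf command line tool, which is why the power-of-2 slicing has to happen before encoding.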
The results were awesome: around 100 to 130 MB of memory used for the campground with ATF textures. Compared with the 700 MB using PNG, it was a huge win.
One thing was missing: how to handle character, object and TV screen images efficiently? Spritesheets were the key. By combining all my images used for characters, objects and TV screens into a single PNG, then encoding it as ATF, I knew it would be perfect. So I used TexturePacker to pack all my images into a power-of-2 transparent PNG file and then encoded it to ATF.
Problem solved 🙂
File size
At that moment, the campground SWF was about 10 to 15 MB. It was big, but at least each location had its own SWF file, loaded only when requested. And then, Apple dropped a bomb on us…
The Retina bomb
One day, I think in March or April, a friend came to me and showed me Apple's new policy regarding non-retina apps submitted after July. All apps submitted to the App Store after July must support retina displays. If not, your submission will be rejected. (Sorry if the dates are wrong, I can't find the press release about it on the web.)
We were not supposed to support retina displays for that project. I can't remember why, but it was that way.
That news was a big slap in the face for us. The design team had to redesign every location done so far. I think they had completed 7 or 8 locations at that time. For me, it meant multi-resolution assets and conditional asset loading depending on the resolution of the mobile device. Damn it, I had to get back to the drawing board again!
Conditional assets loading
Now that I knew I had to support retina displays on mobile only, I exported my panoramic picture in high resolution, which resulted in a 15302 x 2048 PNG background image, cut it into pieces of 2048 x 2048 and encoded it into ATF. I had to do the same process with the standard resolution images.
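Picking the right asset set at runtime can be as simple as branching on the device's screen density. This sketch is an assumption on my part: the DPI threshold and the folder naming scheme are invented for illustration, not the project's actual logic.

```actionscript
import flash.system.Capabilities;

function textureUrlFor(tileIndex:int):String
{
    // Rough heuristic: retina iPads report a much higher screen DPI
    // (the 200 cutoff is an illustrative value, not a hard rule)
    var retina:Boolean = Capabilities.screenDPI >= 200;

    // 1x = standard 7651 x 1024 slices, 2x = retina 15302 x 2048 slices
    var folder:String = retina ? "textures/2x/" : "textures/1x/";
    return folder + "campground_" + tileIndex + ".atf";
}
```

Only the matching resolution ever gets downloaded and uploaded to the GPU, so retina support does not double the memory cost on standard devices.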
Then I had to embed all these ATF files into my SWF. It was working, but the file size was incredibly high. I think it was about 30 to 35 MB for the campground only. It was too big; the loading time would take forever for low speed internet users.
So I figured out a way to build separate SWFs for the desktop and mobile experiences. Desktop would load each location as an external SWF, while the mobile locations would be embedded into a single huge main SWF and then compiled to iOS. That plan was looking great, until I hit the wall on iPad.
Embedding assets into SWF on mobile device
I had never read or heard of that bug on AIR for iOS, but it was a really bad day when I discovered it.
All my assets were embedded in my application using the traditional [Embed] tag. It was working perfectly until I compiled my mobile application with 8 or 10 locations. The application compiled correctly, but when launched on my iPad 2, it just crashed silently at startup. After much trial and error, I found that my application couldn't handle more than 6 locations bundled into my main SWF file. That was a big problem, as the project has 23 different locations.
I'm not 100% sure, but I think the problem was related to the way AIR for iOS handles assets embedded using [Embed]. Even if these assets are never instantiated by your application, AIR needed to allocate enough memory to read my 250 MB SWF. This is where it was crashing: it needed too much memory simply to start the application, and just crashed at startup.
Assets needed to be loaded externally, just like you normally do on a regular website.
Location configuration files
At that time, I started thinking about a way to configure and set up all locations using a custom XML structure. Each location would have its own XML configuration file filled with image paths, character positions and settings, etc.
It turned out to be one of the best decisions I took on the project. Once built correctly, this system was powerful and had lots of flexibility. All assets (textures, spritesheets, XML) were loaded from a CDN. On mobile, assets were loaded from our CDN, then saved to the device's hard drive. The next time they were requested, assets were loaded from the hard drive instead of the CDN. This helped loading time so much.
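The cache-then-load pattern on mobile can be sketched with AIR's filesystem API. This is a simplified illustration; the function names and the download step are placeholders.

```actionscript
import flash.filesystem.File;
import flash.filesystem.FileMode;
import flash.filesystem.FileStream;
import flash.utils.ByteArray;

function loadCached(name:String, downloadFromCdn:Function):void
{
    // Assets are persisted in the app's private storage directory
    var local:File = File.applicationStorageDirectory.resolvePath(name);

    if (local.exists)
    {
        // Cache hit: read the bytes straight from disk, no network needed
        var fs:FileStream = new FileStream();
        fs.open(local, FileMode.READ);
        var bytes:ByteArray = new ByteArray();
        fs.readBytes(bytes);
        fs.close();
        // ...use bytes (e.g. Texture.fromAtfData for an .atf asset)
    }
    else
    {
        // Cache miss: fetch from the CDN, then write the bytes to `local`
        // with FileMode.WRITE so the next launch hits the disk instead
        downloadFromCdn(name);
    }
}
```

After the first visit to a location, every subsequent load is disk-speed instead of network-speed, which is what made the loading times bearable on mobile.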
These configuration files also gave me the flexibility to build complex lists of actions available when clicking on characters and objects. I will not cover the way the whole system works in the Flash application here.
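To give a feel for the approach, here is a hypothetical sketch of a location file. The element and attribute names are invented for illustration; they are not the project's actual schema.

```xml
<location id="campground">
  <background clean="textures/campground_clean.atf"
              blurred="textures/campground_blur.atf"
              width="7651" height="1024"/>

  <character id="trapper" x="1250" y="620">
    <!-- Actions listed here drive the close-up menu shown on click -->
    <action type="video" src="videos/trapper_intro.mp4"/>
    <action type="closeup" zoom="textures/trapper_closeup.atf"/>
  </character>

  <object id="tv_screen" x="3400" y="540">
    <action type="closeup" zoom="textures/tv_closeup.atf"/>
  </object>
</location>
```

Because everything (paths, positions, action lists) lives in data rather than code, the design team could tweak a location without a single recompile.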
Summary
I think the path to success in building interactive experiences is iteration, even if this process is tedious and drains so much of your energy and time. When I compare the first iteration I made with the one used on fortmcmoney.com, it's like comparing apples and oranges. The interactions, animations and user experience are so much better, and this is what's most important when you put your heart and soul into a project.
The months spent working on these various iterations were tough. But thinking back on it, I learned so many things during that process. Learning from our mistakes plays a big role in becoming better developers. I hope the things I learned here will pay off in the future.
In the next post of the series, I will talk about my process to automate asset creation.
Thanks for reading
Posted under Fort McMoney, Flash / AIR
Tagged as Fort McMoney, Case Study