Projects

3 Apr 12

The new Audi A3 webspecial with webcam gesture control


In March 2012 Audi released a new version of its extremely successful Audi A3. The brief for Razorfish was to motivate potential customers to discover the A3's interior, and to transfer the simplicity and intuitiveness of the A3's interior design to the web. So the concept team came up with the idea of creating a web special that could be fully experienced via gestures.

But how to realize that? Full-scale webcam-based controls for websites have not yet left the realm of experimental microsites. And in contrast to Microsoft's Kinect platform, Flash does not provide a software framework for recognizing gestures via the webcam. Kinect is also equipped with multiple cameras and a depth sensor that precisely capture a person's spatial movements.

The aim was to create an experience that is equally interesting when controlled with the mouse as with the hands through the webcam. So the gestures would be based on the movement of the mouse cursor. Consequently, we had to transform the hand movements captured by the camera into a hand cursor.

The hand becomes the cursor

Research into the experiments of creative developers on webcam-based user interaction in Flash resulted in three approaches that could be suitable for our undertaking.

1st approach: Object Tracking

The OpenCV (Open Computer Vision) open source framework for C/C++ provides various algorithms for image processing. Some of them, concerned with object detection within images, were partly ported to Flash by Ohtsuka Masakazu in 2008. Though this technology is extremely promising, we soon had to realize that the code base available in Flash was too limited to support commercial development within time and budget.

2nd approach: Color Tracking

With a colour filter it is possible to determine the location of a person's hands within the image based on their skin colour. We had good results at first. Of course, we needed to filter out the person's head and also to handle different people's skin colours dynamically. That could be resolved by leveraging object tracking to detect the face and grab the person's skin colour from it. But the approach remained unstable with backgrounds that resembled the person's skin colour or under changing light conditions.
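
A minimal sketch of such a colour filter in ActionScript 3, assuming the skin colour has already been sampled from the face; the function name is illustrative, not taken from the production code:

import flash.display.BitmapData;
import flash.geom.Rectangle;

// Crude skin-colour mask (slow per-pixel loop, fine for a sketch): mark pixels whose
// RGB channels are all within a tolerance of the sampled skin colour, then return the
// bounding box of the marked area.
function findSkinRegion(frame:BitmapData, skinColour:uint, tolerance:int = 32):Rectangle {
    var sr:int = (skinColour >> 16) & 0xFF;
    var sg:int = (skinColour >> 8) & 0xFF;
    var sb:int = skinColour & 0xFF;

    var mask:BitmapData = new BitmapData(frame.width, frame.height, false, 0x000000);
    for (var y:int = 0; y < frame.height; y++) {
        for (var x:int = 0; x < frame.width; x++) {
            var c:uint = frame.getPixel(x, y);
            if (Math.abs(((c >> 16) & 0xFF) - sr) < tolerance &&
                Math.abs(((c >> 8) & 0xFF) - sg) < tolerance &&
                Math.abs((c & 0xFF) - sb) < tolerance) {
                mask.setPixel(x, y, 0xFFFFFF);
            }
        }
    }
    // bounding box of all white (skin-like) pixels
    var bounds:Rectangle = mask.getColorBoundsRect(0xFFFFFF, 0xFFFFFF, true);
    mask.dispose();
    return bounds;
}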

3rd approach: Motion Tracking

Motion Tracking example

This method compares successive camera images and determines the areas of change, which represent potential areas of movement. In the first prototype we implemented the rather simple swipe gesture. While the approach worked fine, was quite stable under different light and background conditions, and performed well, it was still too imprecise and jumpy to support more complex gestures.
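
The following is a simplified reconstruction of the basic frame-differencing idea in ActionScript 3 (not the production code): draw the current camera frame, subtract the previous one with a DIFFERENCE blend, filter out camera noise and take the bounding box of what remains.

package {
    import flash.display.BitmapData;
    import flash.display.BlendMode;
    import flash.geom.Point;
    import flash.geom.Rectangle;
    import flash.media.Camera;
    import flash.media.Video;

    // Minimal frame-differencing motion detector.
    public class MotionDetector {
        private var _video:Video;
        private var _previous:BitmapData;
        private var _current:BitmapData;
        private var _diff:BitmapData;

        public function MotionDetector(camera:Camera) {
            _video = new Video(camera.width, camera.height);
            _video.attachCamera(camera);
            _previous = new BitmapData(camera.width, camera.height, false, 0);
            _current  = new BitmapData(camera.width, camera.height, false, 0);
            _diff     = new BitmapData(camera.width, camera.height, false, 0);
        }

        // Returns the bounding box of the pixels that changed since the last call,
        // or an empty rectangle when nothing moved.
        public function detect():Rectangle {
            // swap buffers so the last frame becomes the reference
            var tmp:BitmapData = _previous;
            _previous = _current;
            _current = tmp;
            _current.draw(_video);

            // difference of the two frames: unchanged areas turn (almost) black
            _diff.copyPixels(_current, _current.rect, new Point());
            _diff.draw(_previous, null, null, BlendMode.DIFFERENCE);

            // crude noise filter: differences below the threshold are forced to pure black
            _diff.threshold(_diff, _diff.rect, new Point(), "<", 0x00202020, 0xFF000000, 0x00FFFFFF);

            // bounding box of all remaining non-black pixels
            return _diff.getColorBoundsRect(0x00FFFFFF, 0x00000000, false);
        }
    }
}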

The solution

Because the first approach was not feasible and the second insufficient on its own, the way to go was to tweak the motion detection and maybe combine it with the others. Not an easy task. The enhancements, which evolved over time as we gained a deeper understanding, can be classified into detection, interpolation and performance optimization. Here are some examples:


Detection

  • Merging the results of several motion detections (three turned out to be optimal) added a lot of stability and accuracy; a sketch follows after this list. This handles e.g. the flicker of a light bulb (invisible to a person, but not to a camera!) or duplicate camera frames (the framerate drops in low light due to increased exposure time).
  • Two motion detection loops with different settings: the first optimized to handle fast movement, the second optimized to still capture very slow movement.
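
A minimal sketch of such a merge, here simply averaging the candidate points of the last few detection passes; the actual production merge may differ:

import flash.geom.Point;

// Keep the candidate points of the last few detection passes (3 worked best for us)
// and merge them into one stabilized result by averaging.
const HISTORY_SIZE:int = 3;
var history:Vector.<Point> = new Vector.<Point>();

function mergeDetections(candidate:Point):Point {
    history.push(candidate);
    if (history.length > HISTORY_SIZE) history.shift();

    var x:Number = 0;
    var y:Number = 0;
    for each (var p:Point in history) {
        x += p.x;
        y += p.y;
    }
    return new Point(x / history.length, y / history.length);
}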

Interpolation

  • In what we call triangular interpolation, the last three detected coordinates are taken to calculate the triangular balance point (the centroid) as the result point. This smoothes the detected movement a lot and reduces undesired jiggle.
  • With a bezier curve interpolation the last 10 result points are taken as control points to calculate a bezier curve. The new result point is set at 70 percent along that curve (see the sketch after this list). While this adds a little delay, it further smoothes the movement.
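
The triangular interpolation is simply the centroid of the last three points ((x1 + x2 + x3) / 3, (y1 + y2 + y3) / 3). The bezier smoothing could look roughly like this, evaluating the curve with de Casteljau's algorithm; again a sketch, not the production code:

import flash.geom.Point;

// Bezier-curve smoothing: the last N result points serve as control points and the
// new, smoothed result is the point at t = 0.7 of that curve (de Casteljau evaluation).
function bezierSmooth(lastPoints:Vector.<Point>, t:Number = 0.7):Point {
    var pts:Vector.<Point> = lastPoints.slice(); // work on a copy
    while (pts.length > 1) {
        var next:Vector.<Point> = new Vector.<Point>();
        for (var i:int = 0; i < pts.length - 1; i++) {
            next.push(new Point(
                pts[i].x + (pts[i + 1].x - pts[i].x) * t,
                pts[i].y + (pts[i + 1].y - pts[i].y) * t));
        }
        pts = next;
    }
    return pts[0];
}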

Performance

  • Because the detection loop runs on every frame, it is crucial that it uses minimal computational resources, so it does not impede other parts of the application such as playing smooth animations.
  • Furthermore, garbage collection runs, which are accompanied by frame drops and a collapse in detection quality, should be minimized.
  • The best method to handle both is the consistent use of object pools; a sketch follows after this list. Reusing a BitmapData object, for example, is about 10x faster than creating a new one!
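
A tiny object pool for BitmapData could look like this, assuming all buffers share the camera resolution; the names are illustrative:

import flash.display.BitmapData;

// Instead of allocating a new buffer every frame (which feeds the garbage collector),
// release used buffers back into a pool and reuse them.
var pool:Vector.<BitmapData> = new Vector.<BitmapData>();

function acquireBuffer(width:int, height:int):BitmapData {
    return pool.length > 0 ? pool.pop() : new BitmapData(width, height, false, 0);
}

function releaseBuffer(buffer:BitmapData):void {
    buffer.fillRect(buffer.rect, 0); // wipe the old content before reuse
    pool.push(buffer);
}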

In the end, the system became so accurate that we dropped the plan to combine it with the other approaches (and to deal with their disadvantages). The job was done!

The cursor becomes a gesture

This was the comparatively easy part. Simple movement-based gestures like swiping are recognized by moving the cursor in a specific direction or over a designated shape. The hold gesture – the equivalent of a mouse click – is triggered by keeping the cursor over a designated shape for a while.
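
A hold gesture can be reduced to a timer that is armed while the hand cursor stays over the target and aborted as soon as it leaves; a sketch with illustrative names:

import flash.display.DisplayObject;
import flash.events.TimerEvent;
import flash.utils.Timer;

// Fire a "click" when the hand cursor rests on the target for HOLD_TIME milliseconds.
const HOLD_TIME:int = 1500;
var holdTimer:Timer = new Timer(HOLD_TIME, 1);
holdTimer.addEventListener(TimerEvent.TIMER_COMPLETE, onHoldComplete);

function updateHold(cursorX:Number, cursorY:Number, target:DisplayObject):void {
    if (target.hitTestPoint(cursorX, cursorY, true)) { // stage coordinates
        if (!holdTimer.running) holdTimer.start();     // cursor entered: start counting
    } else {
        holdTimer.reset();                             // cursor left: abort the hold
    }
}

function onHoldComplete(e:TimerEvent):void {
    trace("hold gesture recognized: treat it like a mouse click");
}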

Form-based gestures are recognized by comparison with a previously captured form; in the case of the rotation gesture this is a circle. To reduce false detections, we also applied some noise filters. The comparison is based on the $1 gesture recognizer algorithm; a good AS3 implementation can be found at http://www.betriebsraum.de/.

 


7 Sep 11

TOUCH N CLASH / Awardee of the Adobe Mobile Challenge

On the 6th of July Adobe unveiled the European Challenge to promote the new mobile development features of the Flash/AIR platform. The mission was to get the approval of the Adobe jury and to publish the application on the Android Market, the Apple App Store and the BlackBerry App World before the 1st of September.

The decision was made yesterday, and luckily TOUCH N CLASH won the Novelty/Innovation prize! Read what the jury said at http://www.adobemobilechallenge.com/winnersbeta/

With TNC we successfully demonstrated that it is possible to develop a cross-platform multiplayer game without any kind of server for the three platforms iOS, Android and PlayBook.

What is TOUCH N CLASH?

TOUCH N CLASH is a multiplayer game for two to four players.
The goal is to get the other players out of the game by using a gameball, which works like a bomb. The last player in the game wins.
You will need a Wi-Fi connection to find other players. All players have to be connected to the same Wi-Fi network.

Gameplay

In the game, the colored sides of your gamefield represent the other players.
If a gameball appears in your gamefield, you have to pass it to another player before the countdown runs out.
To pass a gameball to another player, touch the gameball and drag it to a colored side of your gamefield.
Sometimes you have the possibility to get an additional gameball.
A transparent gameball will appear in the middle of your gamefield; touch it to activate it.
Keep your gamefield clear of gameballs.
Good luck.

Insights

TNC was developed using Flash Builder 4.5.1. It uses skinned components from the mobile Spark skin and the ViewNavigator class to switch between views.

Communication

For the cross-platform communication we used the Real Time Media Flow Protocol (RTMFP), which was introduced with Flash Player 10. We implemented a local P2P neighbor lookup to connect devices inside the same Wi-Fi to the overall lobby. All devices together create a P2P mesh which makes it possible to transfer data between the clients. Each client has the possibility to open a new game the other clients can join. When a client opens a game it switches into a kind of “control mode” to tell the other players how to behave and what to do.
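
A minimal sketch of such a serverless lobby: a loopback RTMFP connection plus a NetGroup that discovers peers in the same Wi-Fi via IP multicast. The group name and the multicast address are placeholders, not the values used in TNC.

import flash.events.NetStatusEvent;
import flash.net.GroupSpecifier;
import flash.net.NetConnection;
import flash.net.NetGroup;

var nc:NetConnection = new NetConnection();
nc.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
nc.connect("rtmfp:"); // serverless, LAN only

var group:NetGroup;

function onStatus(e:NetStatusEvent):void {
    switch (e.info.code) {
        case "NetConnection.Connect.Success":
            var spec:GroupSpecifier = new GroupSpecifier("tnc/lobby");
            spec.ipMulticastMemberUpdatesEnabled = true;     // learn about joining/leaving peers
            spec.addIPMulticastAddress("225.225.0.1:30303"); // local peer discovery
            spec.postingEnabled = true;                      // allows group.post() for game messages
            group = new NetGroup(nc, spec.groupspecWithAuthorizations());
            group.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
            break;
        case "NetGroup.Connect.Success":
            trace("lobby joined, waiting for neighbors");
            break;
        case "NetGroup.Neighbor.Connect":
            trace("found another player in this Wi-Fi");
            break;
    }
}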

Gamefield

For the physics-based gamefield we used the Box2D engine. We connected the acceleration sensors of the devices to the gravity of the Box2D world, so the players can influence the course of the gameballs by rotating their device.
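
Roughly like this, using the Flash Accelerometer API and Box2DFlash; the axis mapping and the scale factor are assumptions and depend on the device orientation:

import Box2D.Common.Math.b2Vec2;
import Box2D.Dynamics.b2World;
import flash.events.AccelerometerEvent;
import flash.sensors.Accelerometer;

// Tilting the device changes the gravity vector of the Box2D world,
// so the gameballs roll accordingly.
var world:b2World = new b2World(new b2Vec2(0, 10), true); // initial gravity, doSleep

var accel:Accelerometer = new Accelerometer();
accel.addEventListener(AccelerometerEvent.UPDATE, onAccelUpdate);

function onAccelUpdate(e:AccelerometerEvent):void {
    // accelerationX/Y are measured in g; scale to a sensible Box2D gravity (~10 m/s^2)
    world.SetGravity(new b2Vec2(-e.accelerationX * 10, e.accelerationY * 10));
}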

Performance

Performance was one of our critical points, because the game is only fun when it runs at a good framerate. Choosing the Flex components as a base for the app was not the best choice, but when we ran into the performance issues a switch back to pure AS3 was impossible due to the submission deadline.

What we did to ensure a well-performing app on all devices:

  • Removing all Spark Image components where possible and replacing them with Spark BitmapImages
  • Replacing all vector-based assets with bitmaps
  • Optimizing the render interval of the Box2D-based gamefield
  • Tuning the accuracy (velocity iterations, position iterations) of Box2D to achieve a good ratio between performance and functionality
  • Reducing the overall framerate to about 30 fps
  • Moving time- and CPU-consuming work out of the view transitions, so the user gets a smooth experience while navigating through the app (see the sketch after this list)
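
The last point can be as simple as deferring expensive setup until the view transition has finished. A sketch using the Flex 4.5 mobile View events; gameView and setupGamefield are hypothetical names:

import spark.events.ViewNavigatorEvent;

// viewActivate fires only after the transition is done, so heavy work placed here
// no longer causes dropped frames during navigation.
// 'gameView' is the View instance of the gamefield (hypothetical name).
gameView.addEventListener(ViewNavigatorEvent.VIEW_ACTIVATE, onViewActivate);

function onViewActivate(event:ViewNavigatorEvent):void {
    setupGamefield(); // hypothetical helper that e.g. builds the Box2D world
}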

We also experimented with CPU vs. GPU rendering to get the maximum performance available. The result: we stayed on the CPU. Some devices like the HTC Desire or Desire HD performed better in GPU mode. On the Apple iPad 1 the game was slower on the GPU compared to the CPU, and on the iPad 2 we could not really identify a difference.

In general the performance on these devices was awesome:

  • iPad 2
  • Samsung Galaxy S2
  • BlackBerry PlayBook

Problems

Because we do not use any kind of server (neither locally nor on the internet), we are totally dependent on the features of the Wi-Fi hotspot. Some Wi-Fi networks do not have client-to-client communication enabled; like other Wi-Fi-only multiplayer games, TNC will not work on such networks. Because RTMFP uses broadcast technologies to discover the clients, this can also be a pitfall.

We are currently investigating some problems in PlayBook-only Wi-Fi setups. It sounds insane, but with our current settings a PlayBook is not able to connect to another PlayBook until a different device type such as a PC, Mac, Android or iPhone/iPad joins the NetGroup. The other device then enables the lookup even between the PlayBooks.

Links

TOUCH N CLASH for iOS

TOUCH N CLASH for Android

TOUCH N CLASH for BlackBerry PlayBook

Winners of the Adobe Mobile Challenge

Screenshots

Credits

Game Director & Game Design & Development
Tobias Richter, Kay Wiegand

Design
Felix Moeckel

 


3 May 11

Realtime ribbon: the new Audi Q3 webspecial

Although it’s kinda hard to tell – all the transitions and stages in our latest webspecial for the Audi Q3 are rendered in realtime 3D (which sounds easier than it was).

In this article we'll give a few insights into how we built it, using Away3D 3.6.0 in combination with Collada files and textures exported from Cinema4D.

Before the real production started we did several tests on how to build an endless ribbon. The ribbon had to display the loading process and to form the stages for the actual stage content.

The core of the solution is quite simple: a sequencer which connects several imported meshes, combined with a texture animation and a camera animation.

The sequencer

Because the shape of a fully dynamic ribbon would be hard to handle in realtime, we used a set of meshes which were modeled in Cinema4D. The first and the last face of each sequence have to match the corresponding face of the previous/next sequence.
This was mostly done in the authoring tool (Cinema4D). To achieve a seamless look, the corresponding points of the faces are docked together in realtime inside the Flash application.
For each stage a special short connector sequence is available to help overcome the difference in face width between ribbon and stage.
For the travel between the two different routes of the plot (bright and dark side), specialized sequences are used.
All in all we use seven stages, eight connector sequences (the intro stage needs two), four ribbon sequences and one change sequence.

Let the ribbon flow

The animation of the ribbon is based on a simple but brilliant idea from Richard Olsson: a growing bitmap material. For this, the UV texture coordinates of the meshes have to be unwrapped in a special way.
The texture parts of the mesh have to be aligned in the order of the faces from top to bottom. To animate the growth of the ribbon you “just” have to unmask the bitmap from top to bottom in a modified Away3D BitmapMaterial (a sketch follows below).
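
The core of that reveal can be sketched like this; how the modified BitmapMaterial is invalidated afterwards depends on the Away3D version and is omitted here:

import flash.display.BitmapData;
import flash.geom.Point;
import flash.geom.Rectangle;

// fullTexture: the complete, unwrapped ribbon texture
// materialBitmap: the transparent BitmapData handed to the BitmapMaterial
// progress: 0 = nothing visible, 1 = whole ribbon textured
function applyGrowth(fullTexture:BitmapData, materialBitmap:BitmapData, progress:Number):void {
    var revealedHeight:int = Math.round(fullTexture.height * progress);

    // start from a fully transparent texture, then copy only the revealed band
    materialBitmap.fillRect(materialBitmap.rect, 0x00000000);
    if (revealedHeight > 0) {
        materialBitmap.copyPixels(
            fullTexture,
            new Rectangle(0, 0, fullTexture.width, revealedHeight),
            new Point(0, 0));
    }
}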

In this basic variant the relative height of a UV segment can differ from the height of the corresponding mesh segment. The result is a non-linearly animated ribbon. Because we had to apply easings to the animations, we had to linearize the texture animation.
In the first step we developed a small tool which sorts the mesh faces from top to bottom. This can be done quite easily because, in this case, two neighboring faces always share two vertices (a sketch follows below).
The ordered faces are used to calculate a path which lies exactly in the middle of the ribbon (yellow dots and yellow lines). With this data the height of every mesh segment can be determined. When this data is put in relation to the height of the corresponding UV segments, a very smooth and nearly linear animation of the texture can be achieved.
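
The sorting step could look roughly like this; a face here is just a plain object holding a Vector.<int> of three vertex indices, and startIndex is assumed to point at the topmost face (the real tool works on Away3D face objects):

function orderFaces(faces:Vector.<Object>, startIndex:int = 0):Vector.<Object> {
    var remaining:Vector.<Object> = faces.slice();
    var ordered:Vector.<Object> = new Vector.<Object>();
    ordered.push(remaining.splice(startIndex, 1)[0]);

    // repeatedly pick the face that shares an edge (two vertices) with the last ordered one
    while (remaining.length > 0) {
        var current:Object = ordered[ordered.length - 1];
        var found:Boolean = false;
        for (var i:int = 0; i < remaining.length; i++) {
            if (sharedVertices(current.indices, remaining[i].indices) == 2) {
                ordered.push(remaining.splice(i, 1)[0]);
                found = true;
                break;
            }
        }
        if (!found) break; // mesh is not a clean strip
    }
    return ordered;
}

function sharedVertices(a:Vector.<int>, b:Vector.<int>):int {
    var count:int = 0;
    for each (var index:int in a) {
        if (b.indexOf(index) != -1) count++;
    }
    return count;
}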

Camera path animation

For the camera animation a similar technique is used. To achieve a smooth movement, the centers of the mesh segments define the camera target path. Because the normal path animator of Away3D does not consider the length of each segment, we had to write our own path animator class. It is quite simple: knowing the total path length and the length of each segment, we can do a linear animation along the path (see the sketch below).
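
A stripped-down version of that idea, using flash.geom.Vector3D instead of the Away3D vector class; it returns the position at a normalized progress t measured along the real segment lengths, so the speed stays constant:

import flash.geom.Vector3D;

function pointOnPath(points:Vector.<Vector3D>, t:Number):Vector3D {
    // cumulative length at every point
    var lengths:Vector.<Number> = new Vector.<Number>();
    var total:Number = 0;
    lengths.push(0);
    for (var i:int = 1; i < points.length; i++) {
        total += points[i].subtract(points[i - 1]).length;
        lengths.push(total);
    }

    var target:Number = t * total;

    // find the segment containing the target distance and interpolate inside it
    for (i = 1; i < points.length; i++) {
        if (lengths[i] >= target) {
            var segment:Number = lengths[i] - lengths[i - 1];
            var local:Number = segment > 0 ? (target - lengths[i - 1]) / segment : 0;
            var a:Vector3D = points[i - 1];
            var b:Vector3D = points[i];
            return new Vector3D(
                a.x + (b.x - a.x) * local,
                a.y + (b.y - a.y) * local,
                a.z + (b.z - a.z) * local);
        }
    }
    return points[points.length - 1].clone();
}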

To achieve a smooth camera animation on the path and on the way to the final camera position on the stage we used two “magnetic” camera targets. The red one follows the generated path on the ribbon. The green one is used for the transition from the ribbon animation to the final stage position.

Each sphere exerts a force on the real camera target (visualized by a trident in the picture) which is strengthened or weakened depending on the situation. At the end of the animation all targets are at exactly the same predefined position to ensure a seamless look in combination with the other 2D elements.
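
In its simplest form this boils down to a weighted blend between the two targets; a sketch with illustrative names, where the weight shifts from the path target to the stage target as the transition progresses:

import flash.geom.Vector3D;

// stageWeight: 0 while travelling on the ribbon, 1 when the final stage position is reached
function blendCameraTarget(pathTarget:Vector3D, stageTarget:Vector3D, stageWeight:Number):Vector3D {
    var w:Number = Math.min(1, Math.max(0, stageWeight));
    return new Vector3D(
        pathTarget.x + (stageTarget.x - pathTarget.x) * w,
        pathTarget.y + (stageTarget.y - pathTarget.y) * w,
        pathTarget.z + (stageTarget.z - pathTarget.z) * w);
}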

That's just a small impression of the kind of problems that had to be solved for this webspecial. In the end the performance was better than expected and the result is a convincing combination of 3D and 2D elements, at least when taking into account that the whole implementation phase lasted less than six weeks, including localization and QA!


18 Mar 11

When touch just isn’t enough

Ever wondered how to control your Parrot AR.Drone in a more exact and stable way? Maybe even over longer distances than the integrated Wi-Fi can provide? Well, look no further – Andreas just set up the Ardudrone project on Google Code (http://code.google.com/p/ardudrone/).

“This project contains all required binaries and scripts as well as the source code for modifying the Parrot AR.Drone to be controlled by a standard RC Remote Control. The RC receiver is connected directly to the drone and sends the commands directly with the help of an Arduino board. ”


18 Sep 10

Audi A1 – one hell of a “microsite”

For the record: Audi A1 was the biggest microsite we ever did! Work started in spring 2009, with the final version being released in spring 2010 (which is quite a short timespan for such an ambitious project).

For what it's worth: each piece of content really was a major “wow factor” (at least for a geek). Here comes the shortlist:

Explore


This module might seem like just your standard 360°. But: we did dynamic typography synchronized to the camera movement, and with country-specific embedded fonts. No easy feat :)

Also a first: we used MP4 as the overall video format (when no alpha was required) together with the industry standard for subtitling, DFXP. Gone are the times of fiddling with timestamps and texts, at least for the developers. We used a common movie subtitling/synchronization company which can deliver DFXP. Thanks to OSMF (which we used from its alpha status on).

Style advisor


A great module where the user could choose a personal style of car based on a set of polaroids. The tricky part here? The alpha video had to be matched to the movements of the polaroids. Also, all the logic for assigning cars is completely customizable for each market this site is rolled out for.

The movie


This module started as a plain chapter video player. But after a short while it became clear that we had to give the user at least two spoken language versions and 12 languages for subtitles.

And the biggest feature was yet to come: from the ending point of each chapter a transition was to be implemented to an interactive 360° of the last frame. In this panoramic view hidden goodies could be discovered and won. We learned a lot about projection, stitching and MP4 compression settings in this part of the site. And did I mention the movies had Justin in them? ;)

Customizer


This module might be the most fun from a user perspective. The challenge here was to get all the layering and logical implications implemented correctly. And also for QA to get this tested ;) Circular UI elements are, by the way, an interesting exercise for any self-respecting developer …


The gallery gained at least 60,000 submissions over two runs, which could be voted upon, leading to one of the best-voted designs winning a real A1. Also, each of these customized designs could be posted on Facebook.


Each design was saved in full size directly from Flash to the backend. The user could also download his custom wallpapers in the correct size. This required image creation and compression in the frontend, which is easy only at first glance. In the end we went for an optimized asynchronous solution, accelerated by custom native code (Alchemy).

Community hub


This might look like just some RSS feeds pulled into the site. But: we built a custom backend for this, which crawls Facebook, Twitter, Flickr, Google Blogs and YouTube for the search term “Audi A1” and keeps track of the changes going on in the results. This felt a bit like building Google at times. But it performed far better than expected.

Registrations


Might seem boring, eh? But it is easily the biggest accomplishment of the site. We don't save any data in our own backends, nor do we define registration form layouts/field definitions for all the country-specific versions. Instead we talk to some clever webservices hosted directly in front of the Audi CRM, which deliver form definitions and user data and also do all the necessary validation, storage and single sign-on. So in the end we had one cool form-rendering engine, which was easily skinned using ASwing.

The features you don’t see

  • 146 language versions
  • more than 2.5 GB of assets, 3-4 hours of video
  • tons of language files
  • more than 180,000 lines of code
  • more than 1,500 Jira issues
  • more than 14 releases so far, with up to 3 releases in parallel development
  • a completely modular structure
  • build mechanisms using Ant and continuous integration (Hudson), with automatic generation of config files via XSLT and validation of every XML file in the project
  • we did in fact bottom out the Flex compiler at some points; you can only do so much by putting more RAM into a machine
  • at least 20 moments of enlightenment

I'm not sure I would want to do such a big microsite again in the near future. But we learned so much in the process that this was not only a success for the client, but also for us.

Thanks to the team (Aaron, Alex, Anja, Christian, Eva, Frank & Frank, Martin, Peyman, Philipp, Thien) – you rocked hard!