Hao Li has been an assistant professor of Computer Science at USC since 2013 and works at the intersection of computer graphics and computer vision. His algorithms for dynamic shape reconstruction, non-rigid registration, and human digitization are widely deployed in industry, from leading VFX studios to medical imaging companies. As a research lead at Industrial Light & Magic, he developed next-generation real-time facial performance capture technologies for virtual production and visual effects. With Artec Group, he also created shapify.me, 3D scanning software that allows anyone to create their own 3D-printed figurine at home using a Kinect. Hao spent a year as a postdoc at Columbia and Princeton Universities in 2011 after receiving his PhD from ETH Zurich in 2010. He was a visiting professor at Weta Digital in 2014 and a visiting researcher at EPFL in 2010, Industrial Light & Magic (Lucasfilm) in 2009, Stanford University in 2008, the National University of Singapore in 2006, and ENSIMAG in 2003. He was named one of the world’s top 35 innovators under 35 by MIT Technology Review in 2013.
The age of social media and immersive technologies has created a growing need for processing detailed visual representations of ourselves.
With recent advancements in graphics, we can now generate highly realistic digital characters for games, movies, and virtual reality. However, creating compelling digital content still involves a complex and largely manual workflow. While cutting-edge computer vision algorithms can detect and recognize humans reliably, obtaining functional digital models and their animations automatically still remains beyond reach. Such models are not only visually pleasing but also bring semantic structure to the captured data, enabling new possibilities such as intuitive data manipulation and machine perception. With the democratization of 3D sensors, many difficult vision problems can be turned into geometric ones, for which effective data-driven solutions exist. My research aims at pushing the boundaries of data-driven digitization of humans and developing frameworks that are accessible to anyone. Such a system should be fully unobtrusive and operate in unconstrained environments. With these goals in mind, I will showcase several highlights of our current research efforts in dynamic shape reconstruction, human body scanning, facial capture, and the digitization of human hair. By the end of this decade, our homes will be equipped with 3D sensors that digitally monitor our actions, habits, and health. These advances will help machines understand our appearance and movements, revolutionizing the way we interact with computers and enabling new forms of live communication through compelling virtual avatars.
Engine Co. 4
Evan Hirsch, Managing Partner of Engine Co. 4, is a creative executive with 25 years’ experience working with industry leaders throughout the Americas, Europe, and Asia. In 2011, Hirsch founded Engine Co. 4 to provide strategic consulting on developing immersive multi-platform user experiences and creative development, and to offer tactical firefighting advice for large design and creative projects. Clients include Ubisoft, DeNA, the American Medical Association, The Walt Disney Company, and the University of Southern California’s Institute for Creative Technologies.
Prior to starting Engine Co. 4, Hirsch was the Creative Director on Microsoft’s Live Labs and Surface teams, where he played a lead role in defining the user experience for Surface, the first widely manufactured and distributed multi-touch computer. He has held roles in the visual effects and feature animation industries in London and worked at Electronic Arts for six years in a variety of roles, culminating as Head of Visual Development for EA Worldwide Studios.
Hirsch’s roots are in industrial design; he began his career designing consumer products and corporate identities, including his patented Centrum vitamin packages, which have been in production since 1994. He is a Visiting Scholar at Carnegie Mellon’s Entertainment Technology Center (Pittsburgh, PA) and a Lecturer at Otis College of Art and Design (Los Angeles, CA). Hirsch is a member of the British Academy of Film and Television Arts and currently serves on the Executive Committee of ACM SIGGRAPH as a Director-at-Large.
States and Transitions: A Look at the Games Industry Today
For over 25 years, through recessions and technological shifts, it seemed the games industry could not stop printing money. Then came a perfect storm: the Great Recession, a game console in everyone’s pocket, and development costs that started to rival most motion picture budgets. While “Transmedia” and “Convergence” have promised riches to everyone, the reality is that over 100 game development studios have disappeared since 2008, and the gold rush in mobile games could be a mirage. This talk will give you a clear picture of where the interactive business stands, who is playing and who is paying, why new platforms and business models provide hope and loathing, and why making games is so very different from making film and TV. Most importantly, we will make a few uneducated guesses as to where the bright spots are in the next 12–24 months.
Javier von der Pahlen
Javier von der Pahlen is Director of R&D at Activision Central Studios, where he has led a runtime photoreal character program since 2009. In collaboration with ICT in 2013, Activision R&D introduced the Digital Ira project, which could be considered a milestone toward photorealism in games. Javier started working in computer graphics in the architecture program at Cornell University in the late 80s. Before joining Activision, he co-created Softimage Face Robot in 2005, the first commercially available facial animation software.
Beyond the Demo: The Challenge of Delivering Photoreal Characters in Games, from Captured Pixel to Rendered Pixel
Erwin Coumans is the creator of the Bullet physics engine and a roboticist at Google, where he is responsible for real-time physics simulation research and development. His work is used by game companies such as Disney Interactive Studios and Rockstar Games and by film studios such as Sony Pictures Imageworks and DreamWorks Animation. After studying computer science at Eindhoven University in the Netherlands, he worked on collision detection and physics simulation research for Guerrilla Games in the Netherlands, Havok in Ireland, Sony Computer Entertainment US R&D, and AMD in California. Erwin is a regular speaker at the Game Developers Conference, SIGGRAPH, and other conferences, and is a co-author of the book Multithreading for Visual Effects.
Running an open source project used in games, movies, and robotics:
The ongoing pursuit of increasing performance and quality in collision detection and rigid body simulation.
University of California at Berkeley
James F. O’Brien is a Professor of Computer Science at the University of California, Berkeley. His primary area of interest is Computer Animation, with an emphasis on generating realistic motion using physically based simulation and motion capture techniques. He has authored numerous papers on these topics.
In addition to his research pursuits, Prof. O’Brien has worked with several game companies on integrating advanced simulation physics into game engines, and his methods for destruction modeling have been used in more than 58 feature films. He received his doctorate from the Georgia Institute of Technology in 2000, the same year he joined the Faculty at U.C. Berkeley. Professor O’Brien is a Sloan Fellow and ACM Distinguished Scientist, Technology Review selected him as one of their TR-100, and he has been awarded research grants from the Okawa and Hellman Foundations. He is currently serving as ACM SIGGRAPH Director at Large.
Image and Video Forensics through Content Analysis
Advances in computational photography, computer vision, and computer graphics allow for the creation of visually compelling photographic forgeries. Forged images have appeared in tabloid magazines, main-stream media outlets, political attacks, scientific journals, and the hoaxes that land in our email in-boxes. These doctored photographs are appearing with growing frequency and sophistication, and even experts cannot rely on visual inspection to distinguish authentic images from forgeries.
Techniques in image forensics operate on the assumption that photo tampering will disturb some statistical or geometric property of an image. In a well-executed forgery these disturbances will either be perceptually insignificant or noticeable but subjectively plausible. Methods for forensic analysis provide a means to detect and quantify specific types of tampering. To the extent that these disturbances can be quantified and detected, they can be used to objectively invalidate a photo.
This talk will focus on forensic methods based on geometric content analysis. These methods work by finding inconsistencies in the geometric relationships among objects depicted in a photograph. The geometric relationships in the 2D image correspond to the projection of relations that exist in the 3D scene. If a scene is known to contain a given relationship but the projected relation does not hold in the photograph, then one may conclude that the photograph is not a true projective image of the scene. The goal is to build a set of hard constraints that any authentic photograph must satisfy, so that a violation objectively establishes the image as fake.
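As a small illustration of this idea (a sketch of the general principle, not a method from the talk itself): the cross-ratio of four collinear points is a projective invariant, so it survives the mapping from 3D scene to 2D photograph. If the cross-ratio measured in a photograph disagrees with the value known from the scene, the image cannot be a true projective image of that scene. The homography `H` below is made up, standing in for an arbitrary camera projection:

```python
import numpy as np

def cross_ratio(p1, p2, p3, p4):
    """Cross-ratio (p1,p2;p3,p4) of four collinear 2D points, a projective invariant."""
    d = lambda a, b: np.linalg.norm(np.asarray(a) - np.asarray(b))
    return (d(p1, p3) * d(p2, p4)) / (d(p1, p4) * d(p2, p3))

# Four collinear scene points in homogeneous coordinates.
pts = np.array([[0, 0, 1], [1, 0, 1], [3, 0, 1], [7, 0, 1]], float)

# An arbitrary (hypothetical) projective transformation, e.g. a camera homography.
H = np.array([[2.0, 0.3, 1.0],
              [0.1, 1.5, 0.2],
              [0.05, 0.02, 1.0]])

# Project the points and convert back to inhomogeneous 2D coordinates.
proj = pts @ H.T
proj = proj[:, :2] / proj[:, 2:3]

cr_scene = cross_ratio(*pts[:, :2])  # here (3*6)/(7*2) = 9/7
cr_photo = cross_ratio(*proj)

# An authentic projective image preserves the cross-ratio (up to noise);
# a significant discrepancy would be evidence of tampering.
assert abs(cr_scene - cr_photo) < 1e-6
```

A real forensic constraint of this kind would use measurable scene structure (e.g. evenly spaced tiles or fence posts, whose cross-ratio is known a priori) and would account for measurement noise when deciding whether the constraint is violated.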
Institute for Creative Technologies
University of Southern California
Paul Debevec is a Research Professor in the University of Southern California’s Viterbi School of Engineering and the Chief Visual Officer at USC’s Institute for Creative Technologies where he leads the Graphics Laboratory. Since his 1996 UC Berkeley Ph.D. Thesis, Paul has helped develop data‐driven techniques for photorealistic computer graphics including image‐based modeling and rendering, high dynamic range imaging, image‐based lighting, appearance capture, and 3D displays. His short films, including The Campanile Movie, Rendering with Natural Light and Fiat Lux provided early examples of the virtual cinematography and HDR lighting techniques seen in The Matrix trilogy and have become standard practice in visual effects.
Debevec’s Light Stage systems for photoreal facial scanning have contributed to groundbreaking digital character work in movies such as Spider‐Man 2, Superman Returns, The Curious Case of Benjamin Button, Avatar, The Avengers, Oblivion, Gravity, and Maleficent and earned him and his colleagues a 2010 Scientific and Engineering Award from the Academy of Motion Picture Arts and Sciences (AMPAS).
Debevec is an IEEE Senior Member and Co-Chair of the Academy of Motion Picture Arts and Sciences (AMPAS) Science and Technology Council. He is also a member of the Visual Effects Society and ACM SIGGRAPH. He served on the Executive Committee and as Vice-President of ACM SIGGRAPH, chaired the SIGGRAPH 2007 Computer Animation Festival and co-chaired Pacific Graphics 2006 and the 2002 Eurographics Workshop on Rendering.
Acquiring the Reflectance and Dynamics of Human Skin