Creative Strategies: Project Reflection

With the module finished, I can say that I'm a mixture of happy and underwhelmed. Originally I set out to do more in all areas but ended up compromising due to issues with time and deadlines. I loved the module briefs and think they leave a lot of room to experiment and achieve awesome results, but it all felt a bit overwhelming at times for me personally within the time frame set.

Looking back now, I can say with confidence that I learned a lot. I love modelling and am happy with how my knowledge and approach to the area have developed (looking back at some of my early modelling and thinking, "what the hell is that?"). However, I'm still somewhat clueless about UV mapping, which annoys me because I know it's an area that really holds me back. As a result, I'm determined to solve this problem: I've been looking into Ptex a little and it seems like a really handy way to get some nice maps, but I still want to learn how to UV map in Maya first, and once I know I can do it that way, I'll try out Ptex. Animation and compositing have become personal favourites, and I would love to work on a few projects that combine the two.

Overall I really liked the module and the brief set, but felt a bit overwhelmed at times. Thankfully I'm getting better at dealing with this: knowing when something is too ambitious and when something is realistic to achieve, which has been a problem for me in the past. I'm happy with my outcomes as a starting point; I'm just going to have to keep getting better with practice.

To finish off, I would like to say that I had an awesome team. Edward, Kerry and Sorcha are incredibly passionate and hard-working individuals who are going to do some awesome things in the future. I was really glad I got to work with them, and I'm thankful for all the support and feedback given by both the team and my lecturers.

Compositing Process: From PFTrack to After Effects

I began by importing the footage that Edward recorded into PFTrack.

Edward’s Footage

With Edward's footage imported into PFTrack, I added an Image Manipulator node and adjusted the contrast, brightness, saturation and gamma of the footage to make it a bit brighter and a little sharper, in an attempt to make the footage easier to track.

Image Manipulator.jpg

Footage before: Image Manipulator Node

Before

Footage After: Image Manipulator Node

After

After the Image Manipulator was added, I applied an Undistort node in an attempt to remove any blur or unwanted noise from the footage.

undistort

I then started to track the footage. To begin, I applied an Auto Track node to the footage as a basic foundation for the track.

For each tracker, I edited its deformation attributes to pick up on the rotation, scaling and skewing happening within the footage (there was a bit of all three going on), hoping to obtain a better track, and set a failure threshold of 0.700.

Auto tracked footage

I then overlaid a User Track node on top in order to obtain a more accurate track.

User tracked Footage

After the footage was tracked, I applied a Camera Solver node so that PFTrack could determine the best camera animation based on the footage.

camera slover.jpg

I then used an Orient Scene node to determine the axis orientation and scale of our scene.

To make sure the orientation and perspective of my track were correct, I used the Test Object node.

Test objects in real time

Finally, I exported the tracked PFTrack scene as an ASCII file to continue working with it in Autodesk Maya.

With my tracked footage imported into Maya, I placed my 3D object into the scene and adjusted the tracked camera's settings in its Attribute Editor.

I didn't alter the camera's focal length, as it was keyed, but I changed the Near Clip value to 0.001 and the Far Clip value to 1000000.

camera_setting
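For anyone who prefers to script this step, here's a minimal Python sketch of the same clip-plane tweak using maya.cmds; the camera name 'trackedCameraShape1' is a placeholder for whatever the PFTrack export actually called the solved camera.

```python
import maya.cmds as cmds

# Placeholder name: substitute the shape node of the camera PFTrack exported.
cam = 'trackedCameraShape1'

cmds.setAttr(cam + '.nearClipPlane', 0.001)   # objects closer than this are invisible
cmds.setAttr(cam + '.farClipPlane', 1000000)  # objects further than this get clipped
# The focal length is left alone, since the solve keyed it per frame.
```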

Once the adjustments to the camera had been made and the 3D object matched the tracked scene, I started to implement lighting, consisting of a few directional lights. Since the Rubik's cube had a mia preset material applied to it, the light values were quite important. I ended up going with an intensity of 0.200, which made the material visible in a render but didn't produce intense shadows.

light settings
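As a rough sketch of that setup (the rotation values below are placeholders, not my actual scene values), one of those directional lights could be created like this:

```python
import maya.cmds as cmds

# Create one directional light at the 0.200 intensity that worked for the
# mia material; the rotation here is a placeholder aim direction.
light = cmds.directionalLight(intensity=0.200, rotation=(-45, 30, 0))
```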

Matched scene:

I then assigned my 3D object to its own layer, which I called 'beauty'.

I put the image plane (which I would use to render out the shadow) onto its own layer, called 'shadow'.

Finally, I made a layer consisting of all the objects within the scene to obtain an occlusion pass, with a sample rate of about 300.
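For reference, here's roughly how that three-layer setup looks if you script it with Maya's legacy render layers; 'rubiksCube' and 'imagePlaneGeo' are placeholder object names standing in for my actual scene objects.

```python
import maya.cmds as cmds

# Placeholder object names for the cube and the shadow-catching plane.
cmds.createRenderLayer('rubiksCube', name='beauty', noRecurse=True)
cmds.createRenderLayer('imagePlaneGeo', name='shadow', noRecurse=True)

# The occlusion layer holds every piece of geometry in the scene.
cmds.createRenderLayer(cmds.ls(geometry=True), name='AO_PASS', noRecurse=True)
```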

Thinking I was finished, I rendered out the layers and composited them together; however, I soon realised that while the tracked footage was fine, the shadows were way off.

So, returning to Maya, I altered some of the lights' attributes and positions, and this time rendered out single frames and composited them together within Photoshop, to make sure I was happy with the end result before committing to a full render.

Test One:

Image Test_1

In this test I was working mainly with the shadow layer, but I thought the shadow was too dim for the scene and also strangely sharp around the edges, which confused me.

Test Two:

Image test 2

I didn't like the results I was getting with the shadows, so I started looking into alternatives, like using the occlusion layer as my shadow. I preferred it much more to the shadows because of the way it naturally diffuses both on and off the cube, but I felt it was too strong for the scene.

Test Three:

Image test 3

Playing about with the intensity of some of the occlusion pass attributes, I found a result I was much happier with, one that created the illusion of shadow I was looking for.

I then rendered out the beauty layer and the occlusion layer separately.

With both my beauty layer and occlusion layer rendered out, it was time to composite everything together; in this case I used After Effects.

As I learned from the tutorial mentioned previously (see blog post here):

We can ensure the frame rate of our footage and rendered sequences matches up perfectly by interpreting the footage and applying the same frame rate to all of them. In my case I set everything to 24 frames per second.

interpret

With both my footage and rendered sequences interpreted to match, I started the process of compositing everything together within the layer editor.

after effects layer settings

As seen above, I applied a Multiply layer effect to my occlusion sequences; this layer effect overlays the values of the occlusion pass onto the footage beneath it, creating the soft shadowed effect seen below.
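To illustrate why Multiply behaves like a shadow (this is just the blend-mode maths, not After Effects' internals): with channels normalised to 0..1, the white areas of the occlusion pass leave the footage untouched, while the dark, occluded areas shade it.

```python
import numpy as np

# One RGB pixel of background footage and the occlusion pass above it,
# with channel values normalised to the 0..1 range.
footage = np.array([0.8, 0.6, 0.5])
occlusion = np.array([0.4, 0.4, 0.4])  # dark grey = strongly occluded area

result = footage * occlusion           # Multiply: white (1.0) changes nothing
print(result)                          # -> [0.32 0.24 0.2 ] : a soft darkening
```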

 

Compositing Research: From Maya to After Effects

With my scene and 3D objects set up and matched with our tracked footage, it was time to render everything out and composite it within another piece of software, such as After Effects or NUKE.

For the time being I decided to use After Effects, where the rendered footage would be composited using simple layers within the software.

I came across this YouTube tutorial, which explains how to render out multiple layers within Maya to be composited together later in After Effects.

Click on the title below to be taken to the video.

[TUTORIAL] 3D model into video with STEADY CAMERA (with Maya + After Effects)

 

The tutorial covers the entire process of a basic compositing project, from exporting our recorded footage to Maya, to compositing it all together within After Effects.

Personally, I used this tutorial for its process of rendering out layers with different effects (such as occlusion or shadow) for use in After Effects.

I have some experience using render layers from my 15-second animation project, so the process wasn't entirely new to me.

See blog post: Previous experience using render layers

(Thanks Again Alec!)

12208806_618385658303329_5995500549017902610_n

To render specific objects in layers within Maya, we can select them and then, in the Channel Box on the bottom right of the screen under the Render tab, assign them to a new layer.

renderlayers

Naming conventions that make sense also make the whole process easier to keep organised.

So for example on the image above:

Beauty layer: Will render out as much of the scene as we want it to

Shadow Layer: Will only render out the shadows of the scene

AO PASS layer: Will only render out the occlusion information of the scene.

To render out only a certain object, select everything else in the scene and, in the Attribute Editor under Render Stats, deselect Primary Visibility.

render_visibility.jpg
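Scripted, that step might look like the sketch below; 'rubiksCubeShape' is a placeholder for the one shape we actually want the camera to see.

```python
import maya.cmds as cmds

# Hide every shape except the placeholder 'rubiksCubeShape' from the camera.
# With Primary Visibility off, an object is not rendered directly, but it can
# still contribute to shadows and reflections.
for shape in cmds.ls(geometry=True):
    if shape != 'rubiksCubeShape':
        cmds.setAttr(shape + '.primaryVisibility', 0)
```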

To apply an occlusion or shadow preset to a layer, go into the Attribute Editor and change its presets based on the options provided:

  • Luminance Depth
  • Geometry Matte
  • Diffuse
  • Specular
  • Shadow
  • Occlusion
  • Normal Map

If we want to test whether our shadow layer is working, we can go into our render view and select the button that displays our shadow's alpha levels.

Image Alpha's

Before we batch render, we can check which layers are going to be rendered by observing whether each has a tick or a cross to its left in the Render Layer tab.

renderlayer tab.jpg

Layers with a tick will be rendered out in the batch render, and layers with a cross will not.

Once we have obtained our rendered images, we can import them into After Effects as image sequences.

Finally, to ensure that all of our rendered sequences match up with our recorded footage, we need to interpret them.

Right-click on the imported sequence within After Effects, go to Interpret Footage and then Main. This will open a new window in which we can edit the frame rate of our sequence. Use this method to match the frame rate of the recorded footage with the rendered sequences, or vice versa.

Compositing: Project Update

So, with multiple projects going on, Edward and I haven't had many chances to come together and develop the car idea as much as we'd initially hoped.

Also Blayne joined our team! Really buzzed that he wanted to work with us!

So in the end we've decided to simply composite a 3D object into a live action scene. Our lecturer Alec suggested this concept earlier on as a good exercise to help you break into compositing, so we ran with that.

I think it's a good direction to take our project because it's a simple concept but can be tricky to pull off and make believable: we want our composited object to fit in with our footage as much as possible, looking into proper perspective and lighting to make it realistic. Not our initial plan, but plans change.

We may not be working on a project where one thing leads to another, but we can still help each other out with advice and feedback, which is just as good!

Compositing: An introduction to PFTrack

For our compositing assignment, Alec recommended that we use, or at least look into using, PFTrack from The Pixel Farm.

Considering I've never used the software before, I decided to look up some tutorials recommended for beginners to matchmoving, and came across this really nice series of lessons from Digital Tutors: Your First Day in PFTrack.

2e991bcc7f1f8226f5e36d1e57d3ac76

The first three lessons cover the interface of PFTrack: setting up our project files, setting up preferences suited to your hardware specifications, and understanding the node-based tree system.

Summarised notes:

interface1.jpg

Tracks save automatically to a specified directory on your PC; however, you cannot work within PFTrack without a valid project.

Tree View: Allows you to build node-based trees for our pipeline.

Canvas: Displays our project and workflow on screen, such as showing the trackers in our scene.

  • Displays our 3D space with our solved 2D information

Project Window: Allows us to set up our project.

Navigator: Allows us to navigate our file paths, from which we can import our footage.

If we click into the media administrator, we see that the interface changes slightly, as seen below:

interface2

We still have our Tree View where we can insert our nodes; however, as you can see, we are now presented with two new windows within the interface:

  • The Media Bin and the Navigator

Navigator: Allows us to navigate through our system's file directories in order to find the desired file to import into our project.

Media Bin: Where we can place the footage from the Navigator that we require for our project, ready to be edited within PFTrack.

Setting Up Our Program Preferences

Within PFTrack, we can edit our preferences by clicking the little gear icon on the top right-hand side of the screen, as shown below:

preferences.jpg

It's within preferences that we can set up particular parts of PFTrack to better suit the specs of our PC, such as cache size, tracker options, export settings and even the scene units, where we can set up the scene within our project to match the units of our imported footage.

I modified my cache size by increasing the limit from 2GB to 6GB, giving me more space to work with.

With our footage loaded into PFTrack, we can start to edit it in the media admin menu.

Here we can de-interlace our footage (if the footage is interlaced) and modify the camera pre-sets PFtrack has automatically given us.

Modifying your raw footage in this way can be beneficial, as it is capable of 'cleaning' your footage within PFTrack.

Once you have imported the footage required for your project, it's a good idea to cache the footage with your scene.

Cache_current_clip.jpg

You can do this by selecting the small 'C' symbol located next to the timeline on the lower left of the screen.

Doing this will load the footage information (frame rate, etc.) into PFTrack's cache memory. As a result, the percentage value of the cache used will increase, but caching your footage will allow it to play smoothly within PFTrack without any buffering or delays.

Personal Note

When recording footage, try not to get a lot of lens distortion or blur in it.

When it comes to tracking the scene, these may cause difficulties, because trackers have an easier time locking onto defined, contrasting details within the footage; distortion and blur affect this greatly.

However, we can fix any distortion or blur by creating an Image Manipulator node and applying it to our footage within our tree view.

Image Manipulator.jpg

Settings available through the Image Manipulator Node

It's in the Image Manipulator node that we can minimise the effects of distortion on our footage, by editing features such as pixel density and sharpness, gamma, colour channels, marker enhancements and contrast.

I particularly like this feature because it allows you to detect which colour channel is causing the most distortion within the footage, which means you can make specific adjustments to only that colour channel, as opposed to the footage as a whole, which could cause problems later on.

In addition to the Image Manipulator node, you can also use the Undistort node to help clean up your footage.

The Undistort node allows us to lay out guide lines that tell the software where certain lines within the footage currently are and where they should be.

Tracking

Trackers enable us to supply our PCs with enough X and Y axis information to allow them to figure out the 3D scene within our footage.

When tracking, we have two options: auto track or user track.

Despite being different nodes within PFTrack, both allow us to edit the parameters within the node that define how fast or how accurate we want the track to be.

For example, under 'Search Mode' we can change the setting from Best Speed to Best Accuracy; making this change means our track takes longer, but we get a more accurate result in the end. If our footage contains a lot of camera-based rotation, scale or skew changes, we can enable these deformers under the node's settings. We can also increase or decrease the failure threshold of the track, meaning that if any trackers go beyond the failure threshold, they will fail. This is useful as it results in a more accurate track in the end.
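As a toy illustration of the failure-threshold idea (this is not PFTrack's actual algorithm, just the general pattern-matching concept): a tracker compares the patch it locked onto with the same region in the next frame, and if the match score drops below the threshold, the tracker fails.

```python
import numpy as np

def match_score(template, patch):
    """Normalised cross-correlation between two equal-sized patches (1.0 = identical)."""
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t ** 2).sum() * (p ** 2).sum())
    return float((t * p).sum() / denom) if denom else 0.0

FAILURE_THRESHOLD = 0.700                              # the value I used in PFTrack

template = np.random.rand(15, 15)                      # detail the tracker locked onto
candidate = template + 0.05 * np.random.randn(15, 15)  # same detail, next frame, slightly noisy

score = match_score(template, candidate)
print('tracker fails' if score < FAILURE_THRESHOLD else 'tracker survives')
```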

Automatic Tracking

The software analyses the footage provided and determines of its own accord where trackers should be placed, based on contrasting details.

User Tracking

Allows the user to determine precisely where a tracker is placed within the footage based on their own preference.

In addition to using user tracks, we are provided with a really helpful feature called the Track Window.

track window

The Track Window shows us a real-time close-up of where the tracker will sit within the footage. This can be used to make more precise and specific tracker placements.

Key term: A soft track doesn't stay true to the footage it's tracking; we can clean this up by trimming the error information provided with the track's data (seen below).

trim.jpg

Camera Solver

The Camera Solver node produces the most accurate camera animation it can within PFTrack, based on the 3D scene information along with the most accurate solution of the 3D space obtained from the footage.

Within the Camera Solver node, we can smooth out the movement of the camera by changing its translation and rotation attributes to Med, Smooth.

We then click the Solve All button located at the bottom right of the Camera Solver attributes.

Orientating The Scene

Orienting the scene allows us to determine the axis orientation and scale of the scene by adjusting its rotation, translation or scale attributes.

To orient a scene, we select Marquee under the Orient Scene node attributes and, with a tracker selected, hit Set Origin (highlighted below in yellow).

marquee_set origin.jpg

This causes our scene to orientate itself around the selected tracker.

When adjusting the orientation of the scene, try to line up the horizon line of the Orient Scene node with the horizon line of the footage.

Exporting Our Scene

With our footage altered and tracked to meet our specific requirements, we can export it so that it can be used in other software applications, such as Maya or NUKE.

To do this, we simply add an Export node to our footage in the tree editor.

Here we can choose from a variety of different file formats, ranging from After Effects files to NUKE Python files.

In my case, I'd export the footage as an Autodesk Maya 2011 (ASCII) file.

Setting Up Our Maya Scene:

Before we start working with our tracked footage in Maya, we need to ensure that the workspace units of our Maya scene match the workspace units of our PFTrack project. For example, if the units within our PFTrack workspace were set to metres, then we should change the units of our Maya scene to metres as well. This ensures that our tracked information matches the scale of our Maya scene.

sceneunits

Matching Scene Units
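If you prefer to set this from the Script Editor, here's a minimal sketch, assuming the PFTrack project was set to metres:

```python
import maya.cmds as cmds

cmds.currentUnit(linear='m')                      # match Maya's linear working unit to PFTrack's metres
print(cmds.currentUnit(query=True, linear=True))  # confirm: 'm'
```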

We may also want to change our camera settings within Maya to match the settings of our tracked camera.

Alec gave us some good advice on setting up or adjusting the focal length of the tracked camera within Maya:

“Set up a camera with the same focal length (check the camera make for the multiplier to use, e.g. 35mm on a Canon 600D has a multiplier of 1.6, so the actual focal length is 35 x 1.6 = 56mm).”

Cheers Alec!
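In practice that tip boils down to a single multiplication, sketched below; the setAttr line assumes a camera shape called 'trackedCameraShape1', which is a placeholder rather than an actual name from our scene.

```python
import maya.cmds as cmds

def effective_focal_length(stated_mm, crop_factor):
    """35mm-equivalent focal length for a cropped sensor."""
    return stated_mm * crop_factor

focal = effective_focal_length(35, 1.6)                # Canon 600D: 35 x 1.6 = 56.0mm
cmds.setAttr('trackedCameraShape1.focalLength', focal) # placeholder camera name
```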

Editing the Near Clip and Far Clip

Near Clip: If an object's distance value is lower than the near clip's value, the camera will be blind to it.

Far Clip: If an object's distance value is larger than the far clip's value, the camera will either clip it or not see it.

The values you set for each depend on the scale of the geometry within the Maya scene.

I really enjoyed watching the tutorials provided by Digital Tutors. Personally, as a beginner to the compositing process, I thought each stage was explained really well, taking us from the basics of the interface to the slightly more complicated task of creating a decent piece of tracked footage. I think it's a good place for the team to start when compositing their footage.

Can’t wait to start!

Compositing: The History Of Nuke

Our lecturer, Alec Parkin, sent me this really interesting post from The Foundry's blog summarising the history of the powerful compositing software NUKE, with an interview featuring original NUKE author Bill Spitzak and Simon Robinson, co-founder of The Foundry.

The article starts off by giving a brief history of both Spitzak and Robinson, from their educational backgrounds to how they got to where they are today.

Bill Spitzak: Summary

  • A graduate of both the Computer Science program at MIT and USC film school, with several years of software development experience.
  • Broke into the CG industry in the '90s, where he worked at Digital Domain (DD), the creative team that used a command-line, script-based compositor to handle the donkey work alongside their expensive, fixed-resolution Flame and Inferno systems.
  • As a result he started to develop a visual, node-based version of the system, and NUKE as we know it today was created.

Simon Robinson: Summary

  • In England, Simon Robinson and Bruno Nicoletti were forming The Foundry, putting their passion for post-production and visual effects into creating plug-ins for Flame and Inferno.

To think that both these individuals had the opportunity to contribute to, develop and mould (heavily, I might add) the early stages of CG film into the Goliath it is today is nothing short of awesome.

Over the next five years, The Foundry and Digital Domain grew as the industry matured, and visual effects became increasingly essential to the success of films at the box office. In 2002, NUKE was honoured with an Academy Award for technical achievement.
infographic_1993-2007_1
In 2007, The Foundry took over development of NUKE from Digital Domain; they were looking for a software platform of their own, having reached the technical limits of what they could achieve purely through plug-ins.
Over the next few years, NUKE improved in leaps and bounds as The Foundry added hundreds of new features, including a built-in camera tracker, de-noise, deep compositing and stereo tools, and extended its core with Python, Qt, 64-bit and multi-platform support. It soon became a standard fixture in film pipelines around the world.
infographic_2008-2010_1
In 2010, Jon Wadelton became NUKE's product manager, having previously worked as lead software engineer from 2007. In the same year, NUKE expanded its range to include NUKEX, which combined the core functionality of NUKE with an out-of-the-box toolkit of exclusive features; many of these drew on The Foundry's core image-processing expertise, which had proven so valuable in the plug-in market, including The Foundry's own Academy Award winner, FURNACE.
infographic_2011-1012
Under Jon's guidance, NUKE continued to expand, with the addition in 2012 of HIERO, HIEROPLAYER and NUKE Assist. In 2014, in tandem with the NUKE 9 release, The Foundry introduced NUKE STUDIO: a collaborative VFX, editorial and finishing solution which sits at the top of the NUKE range.
infographic_2013-2016

A really cool read and a nice bit of history on the development and evolution of NUKE and its promising future. Better start getting comfortable with it as soon as possible! Thanks again, Alec!