With the module finished, I can say that I'm a mixture of happy and underwhelmed. Originally I set out to do more in all areas, but ended up compromising due to issues with time and deadlines. I loved the module briefs and think they leave a lot of room to experiment and achieve awesome results, but it all felt a bit overwhelming at times for me personally within the time frame set.

Looking back now, I can say with confidence that I learned a lot. I love modelling and am happy with how my knowledge and approach to the area have developed (looking back at some of my early modelling and thinking "what the hell is that?"). However, I'm still somewhat clueless about UV mapping, which annoys me because I know it's an area that really holds me back. As a result, I'm determined to solve this problem. I've been looking into Ptex a little and it seems like a really handy way to get some nice maps, but I still want to learn how to UV map in Maya first; once I know I can do it that way, then I'll try out Ptex. Animation and compositing have become personal favourites, and I would love to work on a few projects that combine the two.

Overall I really liked the module and the briefs set, but felt a bit overwhelmed at times. Thankfully I'm getting better at dealing with this: knowing when something is too ambitious and when something is realistic to achieve, which in the past has been a problem. I'm happy with my outcomes as a starting point; I'm just going to have to keep getting better with practice.

To finish off, I would like to say that I had an awesome team. Edward, Kerry and Sorcha are incredibly passionate and hard-working individuals who are going to do some awesome things in the future. I was really glad I got to work with them, and am thankful for all the support and feedback given by both the team and my lecturers.
In addition to modelling, we’ve also been tasked with UV mapping some of our models.
I haven’t had much experience with UV mapping in the past, so I thought it would be necessary to brush up on some of the basic theory revolving around UV mapping.
I found a nice article by Renier Banninga (found here) in which he goes over some useful tips and methods to use when UV mapping. In addition, Digital Tutors also has some really useful tutorials addressing UV mapping (found here).
Banninga's article dives straight into the technical aspects involved in the UV mapping process, whereas the Digital Tutors tutorial starts off by explaining some of the basic theory and terminology and then gradually builds into the more technical aspects. Both are great sources of information on the topic and often include visual examples of what they're trying to explain, which I find incredibly useful, being somewhat of a visual learner.
What Is UV Mapping?
UV mapping is a solution to the problem of applying a two-dimensional texture onto a model or piece of geometry that exists in three-dimensional space.
UVs act as a bridge between 2D and 3D, allowing us to apply a 2D image onto a 3D object. Each face on the polygonal object is tied to a face on the UV map.
In reference to the image above: I like to think of a UV map as the 3D object stretched out and flattened, making it easier for us to paint on. This process is often referred to as laying out the UVs, or UV mapping.
When mapping our UV’s there’s a few things we need to keep in mind:
- UV’s need to be spaced evenly to work well, otherwise this could lead to our texture being distorted when applied to an object.
- Seams, , non-connected, non continuous edges on a piece of geometry, plan where they can be hide on an object.
Mapping Types And Their Uses:
Renier describes planar mapping as the most basic of the mapping modifiers to apply to objects.
It projects the texture onto a model from one direction and is useful for mapping objects like walls and basic terrain. However, it isn't considered effective when a complex object with many overlapping surfaces needs to be mapped, because it will often stretch the polygons that don't face the projection directly.
-An example of Planar mapping above-
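To get the idea straight in my head, I sketched the maths of a planar projection in plain Python. This is only a toy version of the concept, not Maya's actual implementation: projecting along one axis just means dropping that axis and normalising the remaining two into the 0-1 UV range.

```python
# A toy sketch of planar UV projection (not Maya's implementation):
# projecting along the Z axis means dropping Z, then normalising the
# remaining X and Y into the 0-1 UV range.

def planar_uv(vertices):
    """Map 3D vertices to UVs by projecting along the Z axis."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    return [((x - min_x) / (max_x - min_x),
             (y - min_y) / (max_y - min_y)) for x, y, z in vertices]

# A quad facing the projection maps cleanly onto the UV square...
quad = [(0, 0, 0), (2, 0, 0), (2, 2, 0), (0, 2, 0)]
print(planar_uv(quad))  # the four corners of the 0-1 square

# ...but two vertices that differ only in depth (Z) land on the same UV,
# which is why geometry that doesn't face the projection gets stretched.
```

That last comment is really the whole story of the stretching problem described above: anything edge-on to the projection collapses in UV space.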
Cylindrical mapping projects the texture inwards in a radial pattern, making it very useful for mapping objects like tree trunks, arms, torsos and legs. It's very handy for blocking out the mapping on various types of meshes, but it still requires a lot of tweaking afterwards in the UV editor.
-An example of Cylindrical mapping above-
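As a rough sketch of the idea (assuming the cylinder's axis runs along Y with unit height; Maya's projection does far more), U can come from the angle around the axis and V from the height along it:

```python
import math

# A rough sketch of cylindrical projection (assumed axis: Y, unit height):
# U comes from the angle around the axis, V from the height along it.

def cylindrical_uv(x, y, z, height=1.0):
    u = (math.atan2(z, x) / (2 * math.pi)) % 1.0  # angle -> 0-1
    v = y / height                                # height -> 0-1
    return u, v

# Points at the same height but on opposite sides spread across U:
print(cylindrical_uv(1, 0.5, 0))   # angle 0   -> u = 0.0
print(cylindrical_uv(-1, 0.5, 0))  # angle 180 -> u = 0.5
```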
Spherical mapping projects the texture in a spherical pattern onto an object. However, it causes a very high pixel density at the poles of the object's mapping, which results in a pinching effect that's hard to counter when painting the texture.
-An example of Spherical mapping above-
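A toy Python sketch of the idea (unit sphere, longitude and latitude; again, not Maya's implementation) also shows where the pole pinching comes from: every ring of latitude spans the full 0-1 range of U, but the real circumference of that ring shrinks to nothing at the poles.

```python
import math

# A toy spherical projection on a unit sphere (not Maya's implementation):
# U from longitude, V from latitude (v = 0 at the top pole).

def spherical_uv(x, y, z):
    u = (math.atan2(z, x) / (2 * math.pi)) % 1.0
    v = math.acos(y) / math.pi  # y must be in [-1, 1]
    return u, v

# Why the poles pinch: every ring of latitude gets the full 0-1 range of U,
# but the real circumference of that ring shrinks towards the poles, so
# pixel density skyrockets there.
def ring_circumference(v):
    return 2 * math.pi * math.sin(v * math.pi)

print(ring_circumference(0.5))   # equator: ~6.28
print(ring_circumference(0.05))  # near the pole: ~0.98
```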
Pixel Density and Stretching
Try to keep the aspect ratio of the pixels in your texture map consistent across the mapping, and look out for areas where the texture gets stretched or skewed. Stretching can cause unnecessary problems for the texture artist, who would have to counter any warped mapping.
To minimize seams when UV mapping, simply align the vertices of the seam with the corresponding connection in the mapping on either the horizontal or vertical plane of the texture coordinates. This way the pixels align on one of the axes.
For technical objects it's easier to get away with seams, since they tend to be quite fragmented and the nature of the object allows it. For organic meshes, however, we should minimize the number of seams as much as possible by using accurate, continuous mapping.
Banninga continues by covering some of the more advanced aspects of UV mapping.
Optimizing UV Layouts
Optimized UV layouts are particularly useful for real-time characters.
(He's basically saying not to waste space in your UV editor: the entire texture gets loaded into memory, so take advantage of it.)
To do so, you should scale, rotate and move the UV-mapped vertices until no more space can be saved.
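As a toy illustration of the idea (a real UV packer juggles many shells, margins and rotation), here's a uniform scale-and-move in plain Python that makes a single shell's bounding box fill the 0-1 square. The scale is uniform on purpose, so the texel aspect ratio stays consistent:

```python
# A toy illustration of reclaiming UV space (a real packer handles many
# shells, margins and rotation): uniformly scale and move one shell so its
# bounding box fills the 0-1 square. The uniform scale keeps texel
# density consistent on both axes.

def fill_uv_space(uvs):
    min_u = min(u for u, _ in uvs)
    min_v = min(v for _, v in uvs)
    extent = max(max(u for u, _ in uvs) - min_u,
                 max(v for _, v in uvs) - min_v)
    return [((u - min_u) / extent, (v - min_v) / extent) for u, v in uvs]

# A shell huddled in one corner of the editor now spans the whole texture:
shell = [(0.1, 0.1), (0.3, 0.1), (0.3, 0.3), (0.1, 0.3)]
print(fill_uv_space(shell))
```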
Unfolding and Relaxing UV’s
Unfolding and relaxing UVs is a handy thing to do if your UVs are caught up and tangled.
Inside the Relax UVs option box we can edit values such as Pin selected UVs or Pin unselected UVs.
Pinning either means those UVs won't be affected by the relax operation.
Relaxing will even out and smooth some of the irregularities in your UVs.
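To understand what relaxing actually does, I find it helps to think of it as nudging each UV towards the average of its neighbours, while pinned UVs stay put. Here's a toy Python version of that idea (Maya's Relax is more sophisticated than this):

```python
# A toy version of relaxing UVs (Maya's Relax is more sophisticated):
# nudge each UV towards the average of its neighbours; pinned UVs stay put.

def relax_uvs(uvs, neighbours, pinned, iterations=10, strength=0.5):
    uvs = list(uvs)
    for _ in range(iterations):
        new_uvs = list(uvs)
        for i, nbrs in neighbours.items():
            if i in pinned or not nbrs:
                continue
            avg_u = sum(uvs[n][0] for n in nbrs) / len(nbrs)
            avg_v = sum(uvs[n][1] for n in nbrs) / len(nbrs)
            u, v = uvs[i]
            new_uvs[i] = (u + (avg_u - u) * strength,
                          v + (avg_v - v) * strength)
        uvs = new_uvs
    return uvs

# Three UVs on a line, the middle one bunched up near the left end;
# with the ends pinned, the middle drifts towards the even spot (0.5, 0.0).
uvs = [(0.0, 0.0), (0.1, 0.0), (1.0, 0.0)]
neighbours = {0: [1], 1: [0, 2], 2: [1]}
relaxed = relax_uvs(uvs, neighbours, pinned={0, 2})
print(relaxed[1])
```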
Unfold lets you unwrap the UV mesh for a polygonal object while trying to ensure that the UVs do not overlap. Unfolding UVs helps to minimize the distortion of texture maps on organic polygon meshes by optimizing the position of the UV coordinates so they more closely reflect the original polygon mesh.
Within the Unfold options we can set constraints to achieve the effect we want, like limiting the unfold to move UVs only horizontally or vertically.
I decided to make a short video on the process of my compositing project.
It covers everything from the early stages in PFTrack to compositing the final rendered passes within After Effects. I really enjoyed the project and can't wait to do more and greater projects within compositing and VFX.
Using the footage that Edward recorded, I imported it into PFTrack. With the footage imported, I added an Image Manipulation node and adjusted the contrast, brightness, saturation and gamma of the footage to make it a bit brighter and a little sharper, in an attempt to make the footage easier to track.
Footage before: Image Manipulator Node
Footage After: Image Manipulator Node
After the Image Manipulation node was added, I applied an Undistort node in an attempt to remove any blur or unwanted noise from the footage.
I then started to track the footage. To begin, I applied an Auto Track node to the footage as a basic foundation for the track.
For each tracker, I enabled its deformation attributes to pick up on the rotation, scaling and skewing happening within the footage (considering there was a bit of all three), hoping to obtain a better track, and set a failure threshold of 0.700.
Auto tracked footage
After which I overlaid a User Track node in order to obtain a more accurate track.
User tracked Footage
After the footage was tracked, I applied a Camera Solver node so that PFTrack could determine the best camera animation based on the footage.
I then used an Orient Scene node to set the axis orientation and scale of the scene.
To make sure the orientation and perspective of my track were correct, I used the Test Object node.
Test objects in real time
Finally, I exported the tracked PFTrack scene as an ASCII file to continue working with in Autodesk Maya.
With my tracked footage imported into Maya, I implemented my 3D object into the scene and adjusted the tracked camera's settings in its Attribute Editor.
I didn't alter the camera's focal length, as it was keyed, but I changed the Near Clip Plane value to 0.001 and the Far Clip Plane value to 1000000.
Once the adjustments to the camera had been made and the 3D object matched the tracked scene, I started to implement lighting, consisting of a few directional lights. Since the Rubik's cube had a mia_material preset applied to it, the light values were quite important. I ended up going for an intensity of 0.200, which made the material visible in a render but didn't produce intense shadows.
After which I assigned my 3D object to its own render layer, called beauty;
the image plane (which I would use to render out shadow) to its own layer, called shadow;
and all the objects in the scene to a layer for an occlusion pass, with a sample rate of about 300.
Thinking I was finished, I rendered out the layers and composited them together. However, I soon realised that while the track was fine, the shadows were way off.
So, returning to Maya, I altered some of the lights' attributes and positions, and this time rendered out single frames and composited them together within Photoshop, to ensure everything was perfect and that I was happy with the end result before I hit render.
In this test I was working mainly with the shadow layer, but I thought the shadow was too dim for the scene and also strangely sharp around the edges, which confused me.
I didn't like the results I was getting with the shadows, so I started looking into alternatives, like using the occlusion layer as my shadow. I preferred it much more to the shadow pass because of the way it naturally diffuses both on and off the cube, but I felt it was too much for the scene.
Playing about with the intensity of some occlusion-pass attributes, I found a result I was much happier with, one that created the illusion of shadow I was looking for.
I then rendered out the beauty layer and the occlusion layer separately.
With both my beauty layer and occlusion layer rendered out, it was time to composite everything together; in this case I used After Effects.
As I learned from the tutorial mentioned previously (see blog post here), we can ensure the frame rate of our footage and rendered sequences match up perfectly by interpreting the footage and applying the same frame rate to all of them. In my case I set everything to 24 frames per second.
With both my footage and rendered sequences interpreted to match, I then started the process of compositing everything together within the layer editor.
As seen above, I applied the Multiply blending mode to my occlusion sequence; this multiplies the values of the occlusion pass over the layers beneath it, creating the soft shadowed effect seen below.
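In case it's useful, here's what the Multiply blend mode boils down to per pixel. This is a simplified sketch with channels in the 0-1 range, ignoring alpha and colour management:

```python
# A simplified sketch of what the Multiply blend mode does per pixel
# (channels in the 0-1 range; alpha and colour management ignored):
# white areas of the occlusion pass leave the plate untouched, darker
# areas darken it, which is what reads as soft shadow.

def multiply(base, occlusion):
    return tuple(b * o for b, o in zip(base, occlusion))

footage_pixel = (0.8, 0.6, 0.4)  # RGB of the plate
open_area = (1.0, 1.0, 1.0)      # unoccluded: white in the AO pass
shadowed_area = (0.5, 0.5, 0.5)  # occluded: mid-grey in the AO pass

print(multiply(footage_pixel, open_area))      # unchanged
print(multiply(footage_pixel, shadowed_area))  # darkened by half
```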
I started off the Rubik's cube by creating a Maya cube and beveling its edges (Bevel is found under Edit Mesh).
This beveled cube was then duplicated into rows, three by three.
These rows were, in addition, duplicated two more times to create the final shape of the cube.
mia_material presets (Glossy Plastic, Black) had been applied previously in an attempt to recreate the cube's visible attributes in Maya.
Colour was then applied over certain faces of the cube to create the varied colour design typically found on Rubik's cubes.
Broken Barbie Doll
With my scene and 3D objects set up and matched with the tracked footage, it was time to render it out and composite it within another piece of software, such as After Effects or NUKE.
For the time being I decided to use After Effects, where the rendered-out footage would be composited using simple layers within the software.
I came across this YouTube tutorial, which explains how to render out multiple layers within Maya to be composited together later in After Effects.
Click on the title below to be taken to the video.
The tutorial covers the entire process of a basic compositing project, from exporting our recorded footage to Maya, to compositing everything together within After Effects.
Personally, I used this tutorial for its process of rendering out layers with different effects (such as occlusion or shadow) for compositing in After Effects.
I gained some experience using render layers during my 15-second animation project, so the process wasn't entirely new to me.
See Blog Post:Previous experience using render layers
(Thanks Again Alec!)
To render specific objects on layers within Maya, we can select them and, in the Channel Box on the bottom right of the screen, under the Render tab, assign them to a new layer.
Naming conventions that make sense also make the whole process generally easier with regards to organisation.
So for example on the image above:
Beauty layer: Will render out as much of the scene as we want it to
Shadow Layer: Will only render out the shadows of the scene
AO PASS layer: Will only render out the occlusion information of the scene.
To render out only a certain object, select everything else in the scene and, in the Attribute Editor under Render Stats, deselect Primary Visibility.
To apply an occlusion or shadow preset to a layer, go into the Attribute Editor and change its preset from the options provided:
- Luminance Depth
- Geometry Matte
- Normal Map
If we want to test whether our shadow layer is working, we can go into the Render View and select the button that displays the shadow's alpha levels.
Before we batch render, we can check which layers will be rendered by observing whether each has a tick or a cross to its left.
Layers with a tick will be rendered out in the batch render; layers with a cross will not.
Once we have obtained our rendered-out images, we can import them into After Effects as image sequences.
Finally, to ensure that all of our rendered sequences match up with our recorded footage, we need to interpret the sequences.
Right-click on the imported sequence within After Effects and go to Interpret Footage > Main. This opens a new window in which we can edit the frame rate of the sequence. Use this method to match the frame rate of the recorded footage with the rendered-out sequences, or vice versa.