Digital Tutors: Introduction to nCloth in Maya

I went through the ‘Introduction to nCloth’ tutorial today on Digital Tutors. It seems like it will come in useful for animating the floaty quality we were thinking of for our character.

Towel simulation:

Flag simulation:

Dress simulation:

[Screenshots]

I was thinking that the movement of our character’s cloak and fins could possibly be simulated in a similar way to the flag or dress tests that I did. They could be modeled as separate pieces of geometry which are then turned into nCloth and parented or nConstrained to the main body (like in the pictures above). Do you think that this type of realistic simulation would fit okay with the style of our world?

These are notes I took to remind myself of points so they might not make a lot of sense out of context:

  • nCloth always works in meters regardless of what unit Maya is set to (so 2cm in Maya is read as 2m by nCloth). This applies to other Maya dynamics also.
  • Adjust the playback speed and max playback speed in the preferences (dynamics need to be played back every frame to evaluate correctly).
  • Turn objects that need to interact with the cloth into ‘colliders’.
  • The nucleus node (in the Outliner) controls global forces such as wind, air resistance and gravity (9.8 m/s²).
  • Values on the cloth and the collider are cumulative (the solver combines both).
  • The importance of scene scale: change the ‘space scale’ attribute in the nucleus’s Attribute Editor to 0.01, so a scene modeled in centimeters simulates at the right size (1 cm is read as 1 m, then scaled back down by 0.01 to 1 cm).
  • Increase the quality of our solver: change settings in nucleus node e.g. steps. Change settings in nClothShape ‘quality’.
  • The input mesh is the original geometry from before the nCloth was created; the nCloth mesh is a duplicate of it. The simulation is calculated from the original geometry, through the nucleus node and the nClothShape, then fed out to the outputCloth mesh that is displayed. Therefore apply commands (e.g. Smooth) to the first part of the node chain (the original geometry), not to the duplicate output mesh, so they can be factored into the calculations.
  • Exploring dynamic properties: if the scene is modeled in cm you need to adjust the ‘lift’ attribute of the nClothShape. Also adjust stretch and compression resistance.
  • Simulate movement through ‘local force’.
  • How to make it feel more like the material: adjust the collision ‘thickness’ (visualised via the ‘solver display’ dropdown), enable the collision surface on both the nCloth and the colliders, and tune ‘self collision thickness’. In the ‘self collision flag’ dropdown, ‘vertex’ usually suffices.
  • Get rid of unwanted jittering and noisiness: nucleus solver attributes and nClothShape quality settings; ‘damp’ sometimes works. Attributes can be keyframed at the stages where there are problems.
  • high substeps and max collisions might be needed to get rid of jitters.
  • Keyframe the nCloth’s ‘is dynamic’ input on/off.
  • Working with constraints: constraints can be applied directly on nCloth. Apply a transform constraint to vertices at the point of force, and parent a locator to the mass that moves. Use ‘remove members’ to remove vertices from a constraint.
  • Creating tearing cloth: select area to be torn with lasso and create ‘tearable surface’. See stretch resistance under ‘dynamic properties’. Use dynamicConstraintShape to tune the tear.
  • Dynamic property maps: e.g. stretchiness that varies over object or certain parts that bend. wrinkle map e.g. from a file of painted wrinkles. For animated wrinkles, connect e.g fractal instead of file. Edit attributes to make longer shapes. Keyframe the offset for motion. ‘Wrinkle map scale’ influences strength of wrinkle map effect.
  • Simulating cloth on a moving character: select the vertices, then shift-select the collider surface the vertices are to follow, then choose the ‘point to surface’ constraint. Select the bones, run `select -hi` in MEL to select the rest of the bone hierarchy, and move the T-pose back to something like frame -50. That way the nCloth simulation can start before the main body animation, giving the cloth time to relax.
  • Identifying potential problems: frames where geometry passes through the cloth will confuse the simulation. Disable the nCloth under ‘evaluate nodes’ and scrub the animation for points where two pieces of geometry pass through each other.
  • Caching nCloth simulations: you can manually adjust values in the Attribute Editor (e.g. stretch resistance) or use Maya presets; ‘lift’ might need to be changed to match the scene scale. Cache the simulation so that when it’s opened on another computer you can be sure it’s exactly the same simulation. ‘Create New Cache’ bakes the animation into an external file; ‘delete cache’ in order to preview changes. Having a cache also lets you hide the negative frames on the timeline, as the cloth will remember its relaxed position at frame 0.
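
The scene-scale point above is easy to get wrong, so here is a small plain-Python sanity check of the arithmetic (the helper names are my own; this isn't Maya API code):

```python
# nCloth/Nucleus reads scene units as meters regardless of Maya's working
# unit, so a scene modeled in centimeters simulates 100x too large unless
# the nucleus 'space scale' attribute compensates.

UNIT_TO_METERS = {"mm": 0.001, "cm": 0.01, "m": 1.0}

def nucleus_space_scale(maya_unit: str) -> float:
    """Space scale that makes the solver treat 1 Maya unit at true size."""
    return UNIT_TO_METERS[maya_unit]

def solver_size_meters(model_size: float, space_scale: float) -> float:
    """Size the solver actually simulates: each Maya unit is read as one
    meter, then multiplied by the nucleus space scale."""
    return model_size * space_scale

# A 2 cm object with the default space scale of 1.0 simulates as 2 m:
print(solver_size_meters(2, 1.0))                        # 2.0 (meters, wrong)
# With space scale 0.01 it simulates as 0.02 m = 2 cm:
print(solver_size_meters(2, nucleus_space_scale("cm")))  # 0.02 (meters, right)
```

This is just the unit conversion the tutorial describes, written out so the 0.01 value stops feeling like a magic number.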

Slán Character Development

[Character sketches]

These are some more sketches I did for our character. I wanted to get your feedback before I move on to orthographic drawings to model from. Are any of them close to what you’d like? Abigail’s drawings had more chibi-like proportions, so is it okay that I’m drawing with slightly more realistic/longer proportions? I also think the cloak makes a nicer silhouette when it fans out a bit at the bottom instead of being wrapped tightly. That wouldn’t be too hard to animate, would it? I imagine it will work fine if we play with a heavier nCloth draped over the body. Also, do you prefer the neck being covered like in the second drawing, or the cloak following the body more closely? Any preference for the tail? Is the length okay in the last drawing?

I haven’t drawn any finished-looking Celtic patterns yet. I thought it’d be nice to have a clear connection between the patterns on the rocks and the patterns on the character’s clothes and face. Abigail and I were also discussing how it might work well if the character is very light coloured (maybe glowing slightly) to stand out against a darker background, like in Ori and the Blind Forest?

After we decide on the character (so there’s something to start modeling from) I can quickly add some backpack designs also.

Character and Background Contrast: Ori and the Blind Forest

I was thinking about what Andrew Deegan had been telling us about good game design having a character with a strong, instant read against the background. Although we’ve been through this a lot before when painting values, it might be easy to forget when we move to 3D, so we should pay extra attention to it. I stumbled across a game that might be good inspiration for us, as it has a very light/white character against a dark background: Ori and the Blind Forest.

Abigail even suggested that we could give some similar treatment to the path that Anam is travelling on to guide the eye more… just like in games! :)

Digital Tutors – Mastering Topology in Maya

I figured it would be a good idea to learn as much as possible about topology before we even begin modeling our character for our Hard-Soft animation. The ‘Mastering Topology in Maya’ series on Digital Tutors has been good so far for getting practice and seeing all the things that need to be taken into consideration. This will also be useful for modeling the head in our Imaging and Data Visualization module.

Here are my notes:

  • N-gons are polygons with more than 4 sides.

Things to consider when eliminating triangles:

  • Think of it not as ‘deleting an edge’ but as ‘moving an edge’. Create new edges and either delete the triangle’s edge or select the vertices and merge components.
  • If triangles are close together, can they be eliminated by creating an edge that extends between them? (either in an enclosed area or looping entirely around symmetrical geometry).
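
Hunting for triangles and n-gons can also be automated by checking each face's vertex count. A quick plain-Python sketch (in Maya you would get these counts from the mesh itself, e.g. via `polyInfo` or the API; the function name here is my own):

```python
# Classify a mesh's faces by side count to audit topology:
# 3 sides = triangle, 4 = quad, 5+ = n-gon.
from collections import Counter

def face_census(faces: list[list[int]]) -> Counter:
    """Count 'tri', 'quad' and 'ngon' faces from per-face vertex lists."""
    census = Counter()
    for face in faces:
        n = len(face)
        census["tri" if n == 3 else "quad" if n == 4 else "ngon"] += 1
    return census

# A small example mesh: two quads, one leftover triangle, one 5-sided n-gon.
faces = [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6], [6, 7, 8, 9, 10]]
print(face_census(faces))  # 2 quads, 1 triangle, 1 n-gon
```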

My attempts at removing triangles:

[Screenshot]

I solved this a different way at first, but it’s good to look for nearby end edges that you can extend a new edge down to.
[Screenshots]

Triangles close together can be solved by connecting an edge between them. It seems easy now, but I overcomplicated it at the time, as usual.

[Screenshot]

[Screenshot]

Merging vertices with the ‘merge components’ command is useful.

[Screenshot]

Working with poles:

  • Poles occur when more than 4 edges converge on a point.
  • Is the pole in a place that will be deforming or static? 5 pointed poles usually occur at the edge of mouth and eye loop groups. Consider which poles are acceptable and which need to be changed.
  • 6 pointed poles can be turned into two 5 pointed poles.
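
Poles can also be found programmatically by counting how many edges meet at each vertex. A plain-Python sketch using the definition above (more than 4 converging edges); the names are my own, and in Maya the edge list would come from the mesh:

```python
# Find poles: vertices where more than 4 edges converge.
from collections import defaultdict

def find_poles(edges: list[tuple[int, int]]) -> dict[int, int]:
    """Return {vertex: valence} for every vertex whose valence exceeds 4."""
    valence = defaultdict(int)
    for a, b in edges:
        # Each edge contributes one to the valence of both endpoints.
        valence[a] += 1
        valence[b] += 1
    return {v: n for v, n in valence.items() if n > 4}

# Vertex 0 has five edges meeting at it -> a 5-pointed pole:
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (1, 2), (3, 4)]
print(find_poles(edges))  # {0: 5}
```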

[Screenshots]

Edge loops:

  • Edge loops match what the model needs to do, i.e. they let you know where and how you can deform your polygons.
  • Deformation needs resolution e.g. loops which describe the nasolabial folds/laugh lines. Loops can be rerouted to add extra resolution to these places.

[Screenshots]

Facial loops:

  • The loops around the eyes and mouth (and rest of face) should follow the flow of muscles underneath so facial geometry can deform correctly.

I liked the method of drawing the facial loops with the CV curve tool and then converting them to polygons, which could be extruded. This is my attempt at drawing facial loops over this drawing from Digital Tutors:

[Screenshot]

I messed up quite a bit and need to try this more, but at least I have an idea of where to start with something complicated like the face. This was the solution from Digital Tutors, which had a less puckered look:

[Screenshot]

I’ll need to study more references to get a better feel for which way the loops curve around.

Geometry reduction:

  • How can you connect an area with high density edge flow to one with lower resolution? e.g. the front of the face has a high density to accommodate facial deformation compared to the back of the head which is static.

[Screenshot]

This is my attempt at connecting the hand to the arm exercise:

[Screenshot]

My connection would have been cleaner if I had started by examining the mesh more and seeing that each finger has three edges which could converge neatly. I also made the mistake of converging the mesh too close to the wrist, where more deformation is going to happen than at, say, the back of the hand. This was the Digital Tutors solution:

[Screenshot]

Resolution for animation:

  • Resolution is needed in areas that bend.
  • Look at sketches and concept art of the character, and discuss how the character will need to move. Will areas need resolution for squash and stretch?

Akira Kurosawa – Composing Movement

This video from the YouTube channel ‘Every Frame a Painting’ discusses how Akira Kurosawa uses movement in his shots. For our Hard-Soft animation we’ve been discussing the merits of either calm or stormy weather. The stormy weather definitely contributes to the pathos of the shots, but the still and quiet can have an equal amount of power too. Visually, I think some movement caused by the weather would have more appeal, even if it were just the slow advance of rolling fog (like in Kurosawa’s scene), the wind in the trees, or the swelling of waves (not necessarily fast/angry, but large and powerful?).

Akira Kurosawa – Composing Movement (2015)

  • Movement of the weather and elements.
  • Movement of groups can amplify an emotion.
  • Repeated movements of a single character can become recognisable.
  • The camera movement has a beginning, a middle and an end.
  • Kurosawa cuts on movement.
  • ‘If you know what the scene is about, try to express it through movement’: How is the character feeling? Is there any way they can convey that by moving? Can background weather/elements convey how they’re feeling?

Camera Movements

Helping to plan shots for the Genome Tower has made me realise all over again how much I still need to learn about cameras and planning moving shots. It will also be particularly useful for our Hard-Soft animation project.

These shots from Solaris (2002) were particularly helpful in planning our camera shots of the Genome Tower:

We also looked at Interstellar (2014).  In this video they talk briefly about visualising the black hole which is interesting:

This Telegraph article and video on Gravity‘s (2013) behind the scenes is also very interesting! Haha, 7000 years to render on one computer….

We also tried a trombone shot, but it didn’t fit very well so we decided to leave it out. I found this short video on the history and science of the dolly zoom to help me understand it a little better! :)
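
For reference, the geometry behind the dolly zoom is simple: to keep a subject the same apparent size while the camera moves, the field of view must satisfy width = 2·d·tan(fov/2). A small Python illustration of that relationship (my own sketch, not from the video):

```python
# Dolly zoom: as the camera dollies back, the lens must zoom in (narrower
# FOV) so a subject of fixed width keeps filling the same fraction of frame.
import math

def fov_for_constant_size(subject_width: float, distance: float) -> float:
    """Horizontal FOV (degrees) that keeps the subject the same apparent size."""
    return math.degrees(2 * math.atan(subject_width / (2 * distance)))

# Dolly back from 2 m to 8 m while holding a 2 m-wide subject in frame:
for d in (2, 4, 8):
    # FOV narrows as the camera retreats -- the classic "trombone" effect.
    print(f"{d} m -> {fov_for_constant_size(2.0, d):.1f} deg FOV")
```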

2001: A Space Odyssey also had some gorgeous shots of the large and slow sci-fi feel that we were going for:

Belfast Genome Tower

This is the finished project that Kerry, Christian, Matthew and I made for the Building Belfast design brief.

For the animated infographics we modified a template to reflect our own collected data:

The song is Emerge From Smoke by Shlomo, which Kerry found. Unfortunately YouTube mutes it every time I upload, but it’s too good a fit to replace.

We spent a long time on the ideation and conceptualization stages, and even spent quite a while considering carrying out our own data collection on more unusual statistics that we could base our designs around, e.g. head size, coffee orders, fashion, etc. In the end we settled on the idea of storing the population’s genomes as data on servers, and did a photo mash of our sketchbook pages in Photoshop. Matthew sketched the image below from what was produced:

[Sketch]

The top part is for pumping water around the helix cooling system and the middle part is a heat sink. Obviously we got some design inspiration from Watson and Crick and the DNA double helix for storing our data. In the final design we have 80 spheres (2 on each rung) and within each sphere are 64 servers storing 8TB of data each. These are some shot and lighting tests that Matthew did:
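
Incidentally, the server figures above multiply out to a huge total capacity; a quick back-of-the-envelope check:

```python
# Storage capacity of the Genome Tower design, from the figures above:
# 80 spheres, 64 servers per sphere, 8 TB per server.
spheres = 80
servers_per_sphere = 64
tb_per_server = 8

total_servers = spheres * servers_per_sphere  # 5120 servers
total_tb = total_servers * tb_per_server      # 40960 TB
print(f"{total_servers} servers, {total_tb} TB = {total_tb / 1024} PB")
# 5120 servers, 40960 TB = 40.0 PB
```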

We liked the effect of varying the depth of field and pulling focus, but it added a lot of time to the render, and the view in the viewport was a lot different from what the mental ray renders were producing. We decided not to use it this time, but it’s definitely something to consider for the future.

These are some design and lighting tests for the heat sink I modeled: