I went through the following tutorial on Pluralsight to learn the basics of using XGen. I then experimented with it, trying to create a soft, fleecy body for the Octaves. Learning a new system and applying it in time for a deadline was quite challenging. University projects were good preparation for this, but I found there was an extra level of stress involved in the workplace.
Creating Dynamic Fur with XGen in Maya
An important note that wasn’t explained in the tutorial concerns file paths. XGen creates folders for each modification that is applied to the groom, and if any of the file paths to these folders are wrong, the groom will disappear. I found that XGen would sometimes save folders to the wrong collection directory, so it’s important to check that folders are initially created where you want them. When working across multiple computers, I also found it helpful to make sure all the file paths are relative: instead of a full directory path, start the file path with “$DESC/” so that the ‘description’ folder is searched from whichever collection the XGen is set to.
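Since grooms break silently when paths stay absolute, a small script can rewrite the paths stored in a collection file to be $DESC-relative. This is a hypothetical sketch in plain Python (the example file contents and folder names are made up, and this is not part of XGen itself):

```python
# Sketch: rewrite absolute description paths so they start with $DESC,
# making the groom portable across machines. Paths here are invented
# examples, not a real project layout.

def make_desc_relative(lines, desc_root):
    """Replace any occurrence of the absolute description folder path
    with $DESC in each line of a collection/description file."""
    return [line.replace(desc_root, "$DESC") for line in lines]

fixed = make_desc_relative(
    ["region_map = C:/projects/octave/collections/fur/description/maps/region.ptx"],
    "C:/projects/octave/collections/fur/description",
)
print(fixed[0])  # region_map = $DESC/maps/region.ptx
```

After a pass like this, the same collection opens correctly on any machine, because $DESC is resolved against whatever collection XGen is currently pointed at.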
XGen also seemed to have a bug on my setup. Whenever I imported a collection into a scene, two ‘description’ folders were created within the ‘collection’ folder. One of them contained all the information needed but was ignored by XGen, while the other was read by XGen but contained an empty setup. A simple cut and paste fixed this. Maybe there was a reason for it?
My notes from the tutorial:
Creating a Sci-fi Alleyway: Detailed Environment Techniques with Devon Fay – a great, inspirational tutorial on The Gnomon Workshop.
What a beautiful piece of art Fay’s “Sci-fi Alleyway” is. I love this feeling of rain-slick, beaten-up alleyways filled with old tech and Japanese references. The image has really stuck with me and I can’t help but think of it when I look down narrow alleys in Belfast. Even at the weekend, I saw a poster in The Ramen Bar in Dublin and immediately thought, sci-fi alleyway!
This is the VR (or AR?) dream.
Summary of my notes from the tutorial:
I was just beginning to research how we could create the grass in our stargazing scene when I came across this method using Paint Effects. In this tutorial, Hermes demonstrates how you can simply paint clumps of grass with the stroke of your pen/mouse. These grass clumps can then easily be modified and animated using the turbulence tab under the attributes. I will look further into how flexible this system is for creating our own style – particularly for adding elements that look like brush strokes or, at the other extreme, very flat and graphic.
Maya 2014 tutorial: Animate grass to react to wind
To research further into how Paint Effects can be used, I looked through chapter 10 of this book:
Mastering Autodesk Maya 2016
These were some of the questions which arose before or during my reading of the chapter.
Chapter 10 Paint Effects pg 419 – 471
What do I need to figure out from these chapters?
- How do I make my own custom shape to paint with?
- Can I shade these whatever way I want to? How can I make a custom grass shape and shade it the same colour? And then light it?
- Do they look good with light rendered with Maya software?
- How can I animate these with ‘turbulence’ to match the wind in the scene?
- What surface will I paint these on? Will the ground be visible?
- How can I shade and light the grass so that it only fluctuates between two colours when it blows in the wind? Dark grass that catches hard light. – Light-link only the grass, and make the ground the same colour?
- Do you need to cache the wind/turbulence animation? Will the animation be retained in history?
These are some notes which I took while reading. There are still some areas which I haven’t covered yet e.g. Sorcha was telling me about painting along curves to create hair. At least I know that it’s possible.
Painting on 3D objects pg 425
- You can paint in any camera or through the paint effects window.
- You can create a bumpy/organic surface by lofting a surface between two dynamic hair curves (explained in a different chapter).
- Paint Effects can be used with pressure sensitivity on your stylus using the pressure mapping settings e.g. mapped to scale. I didn’t fully understand how to make this work at first so I searched online and found this video which helped:
Pluralsight Creative (2013) Top Tip: Using Pressure Sensitive Features Within Maya
- Both NURBS and polygon objects can be painted on.
- Make 3D objects “paintable”. Objects must be UV mapped in the 0 to 1 space. You can paint on a moving surface such as water. You can generate Paint Effects on a NURBS curve.
- A stroke node and a transform node are created when you paint. The brush node is also connected to the strokes and retains a construction history.
- Each stroke has its own brush node but you can edit brushes at the same time with brush sharing.
- You may need to make changes in the node editor e.g. to change the type of curve controlling the stroke.
- A different curve node is created depending on the type of surface which is painted on.
- Paint Effects is a development of L-systems: mathematical algorithms used to simulate living organisms.
- Some brushes add shapes to the scene. Others work by affecting the appearance of geometry behind the brush. E.g. ‘erase’ paints black holes in the alpha channel.
- Mesh strokes don’t render in Mental Ray. However, strokes can be converted to polygons.
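One of the notes above says that paintable objects must be UV mapped in the 0 to 1 space. As a quick sanity check, that condition can be sketched in plain Python – this is just the logic applied to raw UV coordinates, not a Maya API call:

```python
# Sketch: check that every UV coordinate sits inside the 0-1 square,
# which a surface needs before Paint Effects can paint on it.
# The uvs argument is a plain list of (u, v) tuples.

def uvs_in_unit_square(uvs, tol=1e-6):
    """Return True if all (u, v) pairs lie within 0-1 UV space."""
    return all(-tol <= u <= 1 + tol and -tol <= v <= 1 + tol
               for u, v in uvs)

print(uvs_in_unit_square([(0.1, 0.9), (0.5, 0.5)]))  # True
print(uvs_in_unit_square([(1.2, 0.3)]))              # False
```

In practice you would pull the UVs from the mesh first; the point is only that coordinates outside 0–1 (tiled or offset shells) need to be moved back before making the surface paintable.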
Rendering Paint Effects
- Light linking does not work with paint effects.
This article from Autodesk was helpful to load the Paint Effects shelf and add custom brushes to the shelf: Prepare to use Paint Effects.
A lot of the ideas we discussed for our sci-fi scene contained destroyed objects. This tutorial has a lot of helpful pointers for what you should be thinking about when modeling destruction. One of the most useful methods I took away was that an area should be extracted from its surroundings first if a lot of resolution is going to be added, e.g. if you’re going to splinter the end of a piece of wood.
This is the tutorial:
Modeling Architectural Destruction in Maya
Look at reference to see how different materials respond to stress. Wood splinters, concrete breaks in a particular way, glass etc.
Select faces of section you want to modify and extract. Cut a jagged line with the edgeloop tool and extract the cut area. Fill the missing geometry around the areas that have been extracted.
Re-use pieces that are broken off as rubble for later. Focus on the type of material you are cutting up and think of what tools you can use to do this. What kind of damage do you want to do? From where/to what extent.
Use ‘fill hole’ or ‘bridge’ etc. to fill gaps that are made from cutting away jagged geometry. Insert multiple edgeloops where extra resolution is needed for detail. Where on the model has the stress been applied to? – only destroy these areas e.g. above or below. Is the stress damage on the front face or has the object been broken on the side?
By destroying the model you are exposing areas that would not normally be seen e.g. under floors and between walls.
Metal framework bends under stress; metal will probably not have little chunks cut out of it the same way that concrete does. Wood will have sharp snaps that splinter and therefore require higher resolution. Electrics will be pulled out from their settings and will have trailing wires, some connected and some disconnected – use the CV curve tool, displace and move the created curves out of the same plane, and resize the NURBS curve to adjust the radius of the extrusion. When modelling destruction, try to maintain the ratio of volume between missing sections and the amount of rubble. Buckle areas, e.g. the floor. You can add more resolution to NURBS by using ‘rebuild surfaces’ (or by individually adding isoparms). Look for sharp versus rounded edges in your geometry.
Christian told us about modeling using vectors imported from Illustrator. I found this tutorial to help me out:
Digital Tutors: Creating 3D Geometry from Illustrator Curves
This tutorial helped me figure out how to turn my platform drawing into a vector shape using Photoshop:
How to turn hand drawn icons into vector shapes in Photoshop
These are the notes from a tutorial I did to test whether we could use this method for adding detail to our monster characters in Maya. The idea is to make a low-poly mesh in Maya and then generate a normal/displacement map from the sculpted detail in Mudbox, which can be brought back to Maya for rendering.
02. Importing and Exporting Geometry
Exporting as an .obj file exports the geometry, UV map etc.; .fbx contains more detail. The grid sizes in Maya and Mudbox don’t match. In Maya, freeze the transformations before exporting. When exporting from Mudbox, check Window > Preferences and review the settings, e.g. FBX blendshape. Then select the object and export.
03 Importing and exporting texture file
You can export your paint layer by right clicking and exporting maps. If you want to export your paint layers separately, make sure that ‘flatten layers on export’ is turned off in the preferences. Paint the desired layers. Then select the object and export as an fbx. This will automatically set up a layered shader of the different paint layers.
04 Transferring normal and displacement maps.
Go to Maps > Texture Maps > New Operation. A normal map is like a bump map: it allows you to work with a low-resolution model while making it appear highly detailed. The target model is the level 0 low-resolution mesh in Mudbox and the source model is the highest-level sculpt. Change the method to subdivision instead of raycasting (as we are working with the same mesh that detail has been added to). Choose ‘tangent based’ for geometry that is going to be deformed. A normal map is only the illusion of detail, whereas a displacement map creates geometry. Export displacements as a 32-bit .exr and plug the file into the displacement of the material. For the displacement to work, the geometry needs to subdivide at render time: go to Rendering Editors > mental ray > Approximation Editor, and under Subdivision hit ‘Create’ to make a new approximation. In the Attribute Editor, set subdivisions to 3 (for example).
05 Using the ‘send to’ features in Maya and Mudbox.
06 Creating base meshes in Maya.
Meshes with topology problems, such as non-manifold topology, will not be able to subdivide in Mudbox. Try to create an even quad layout.
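The non-manifold problem mentioned above can be sketched in plain Python: a classic symptom is an edge shared by more than two faces. This is only the counting logic on face-index lists (a hypothetical helper, not Mudbox’s actual validation):

```python
from collections import Counter

# Sketch: flag non-manifold edges in a mesh described as tuples of
# vertex indices per face. An edge used by more than two faces is
# non-manifold and will stop clean subdivision.

def non_manifold_edges(faces):
    """Return the edges (as sorted vertex-index pairs) shared by
    more than two faces."""
    edge_count = Counter()
    for face in faces:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            edge_count[tuple(sorted((a, b)))] += 1
    return [edge for edge, n in edge_count.items() if n > 2]

# Three quads all meeting along the edge (0, 1) -- non-manifold:
faces = [(0, 1, 2, 3), (0, 1, 4, 5), (0, 1, 6, 7)]
print(non_manifold_edges(faces))  # [(0, 1)]
```

In Maya, Mesh > Cleanup can find and fix these cases before the mesh goes to Mudbox; the sketch just shows what the check is looking for.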
07 Working with UVs.
You can create UVs in Mudbox from the menu, which will break each face up into shapeless coordinates. You can export UVs as an .obj file, then select the object in Mudbox and import the UVs. You can also tile the UVs in Maya/Mudbox so as to dedicate more resolution to each area. Paint layers are based on the UV layout but sculpt layers are not; you can, however, transfer paint layers from bad UVs onto good UVs, for example.
08 Modifying and updating topology.
Sometimes as we sculpt, the flow of topology on the original mesh may not match up with our sculpted result. Reroute the topology to match the detail.
09 Adding New Pieces of Geometry
Send your geometry from Mudbox over to Maya as a guide for adding new geometry around it.
10 Importing Joints and Weighting
A rig set up in Maya can be imported along with the model. The rig is accessed in Mudbox through the ‘pose tools’ tray along the bottom. A model can be exported as an .fbx – make sure the ‘import rig’ option is ticked in the preferences for .fbx.
This is the mesh that I drew on top of Siobhan’s face for reference during modeling. I had also aligned the front and side views in Photoshop, but there were still some areas, like the nostrils, that didn’t line up perfectly in Maya.
This topology/facial articulation reference from hippydrome.com came in very useful:
I figured it would be a good idea to learn as much as possible about topology before we even begin modeling our character for our Hard-soft animation. The ‘Mastering Topology in Maya‘ series on Digital Tutors has been good so far for getting practice and seeing all the things that need to be taken into consideration. This will also be useful for modeling the head in our Imaging and Data Visualization module.
Here are my notes:
- N-gons are polygons with more than 4 sides.
Things to consider when eliminating triangles:
- Think not of ‘deleting an edge’ but of ‘moving an edge’. Create new edges and either delete the triangle’s edge or select vertices and merge components.
- If triangles are close together, can they be eliminated by creating an edge that extends between them? (either in an enclosed area or looping entirely around symmetrical geometry).
My attempts at removing triangles:
I solved this a different way at first but it’s good to look for end edges close by that you can extend a new edge down to.
Triangles close together can be solved by connecting an edge between them. It seems easy now, but I overcomplicated it at the time, as usual.
Merging vertices with the ‘merge components’ command is useful.
Working with poles:
- Poles occur when more than 4 edges converge on a point.
- Is the pole in a place that will be deforming or static? 5 pointed poles usually occur at the edge of mouth and eye loop groups. Consider which poles are acceptable and which need to be changed.
- 6 pointed poles can be turned into two 5 pointed poles.
- Edge loops match what the model needs to do, i.e. they let you know where and how you can deform your polygons.
- Deformation needs resolution e.g. loops which describe the nasolabial folds/laugh lines. Loops can be rerouted to add extra resolution to these places.
- The loops around the eyes and mouth (and rest of face) should follow the flow of muscles underneath so facial geometry can deform correctly.
I liked the method of drawing the facial loops with the CV curve tool and then converting them to polygons which could be extruded. This is my attempt at drawing facial loops over this drawing from Digital Tutors:
I messed up quite a bit and need to practise this more, but at least I now have an idea where to start with something as complicated as the face. This was the solution from Digital Tutors, which had a less puckered look:
I’ll need to study more references to get a better feel for what way the loops curve around.
- How can you connect an area with high density edge flow to one with lower resolution? e.g. the front of the face has a high density to accommodate facial deformation compared to the back of the head which is static.
This is my attempt at connecting the hand to the arm exercise:
My connection would have been cleaner if I had started by examining the mesh more and seeing that each finger has three edges which could converge neatly. I also made the mistake of converging the mesh too close to the wrist, where more deformation is going to happen than, for example, on the back of the hand. This was the Digital Tutors solution:
Resolution for animation:
- Resolution is needed in areas that bend.
- Look at sketches and concept art of the character. Discuss what way the character will need to move. Will areas need resolution for squash and stretch?