I spent this week polishing and finalizing the 'Satisfying' animation. After showing it to a lot of different people, the consensus was adamant that the machine was a donut machine, and many suggested I turn the non-specific ring into a donut, as they would find that addition more satisfying.
Taking that very common piece of feedback into account, I modelled a donut based on the original torus, and it was here that I found an opportunity to use MASH. MASH has a 'Placer' node that takes an object converted into a MASH network and allows it to be placed on any surface. Using that function, I created a small cylinder to serve as a sprinkle and, with the Placer node, added it to the surface of the donut.
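The Placer node handles all of this internally, but the underlying idea, sampling a point on the torus and orienting each sprinkle along the surface normal, can be sketched in plain Python. This is just an illustration of the math; the radii are made-up values, not taken from my actual donut:

```python
import math
import random

def torus_sample(ring_radius, tube_radius):
    """Pick a random point on a torus surface and return it
    together with the outward surface normal (unit length)."""
    theta = random.uniform(0, 2 * math.pi)  # angle around the ring
    phi = random.uniform(0, 2 * math.pi)    # angle around the tube
    # Point on the torus surface.
    point = (
        (ring_radius + tube_radius * math.cos(phi)) * math.cos(theta),
        (ring_radius + tube_radius * math.cos(phi)) * math.sin(theta),
        tube_radius * math.sin(phi),
    )
    # Surface normal: points from the tube's center circle outwards.
    normal = (
        math.cos(phi) * math.cos(theta),
        math.cos(phi) * math.sin(theta),
        math.sin(phi),
    )
    return point, normal

# A sprinkle aimed along `normal` lies flat against the donut surface;
# ignoring the normal is what leaves sprinkles sticking straight up.
point, normal = torus_sample(ring_radius=2.0, tube_radius=0.8)
```

Aligning each sprinkle's up-axis to that normal is, as far as I understand it, what the Placer's orientation options are doing for you.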
Donut model
I also made use of the modelling tools in Maya to add slight imperfections to the donut and its icing to make it slightly more realistic. Afterwards I parented it to the two original rings so I wouldn't need to re-animate anything.
The hardest thing this week was getting the MASH Placer node to function correctly: several times the sprinkles would be placed sticking upright out of the surface of the donut, which looked bizarre. I also tried importing the EXR into After Effects to add some smoke and light in post, to give the idea of 'something' happening to the donut after it goes down the tube, but I could never get it to look good. Either the smoke disappearing looked unnatural or the glow from the hole seemed bizarre. I decided to abandon the idea and instead focus on fine-tuning what I already had.
The Look Development Artist, shortened to Look Dev, has an interesting role that combines knowledge from the fields of lighting, texturing and creature design. Unsurprisingly, this role also works alongside those artists. After a concept artist creates a design for a creature, it is a Look Dev's job to take that work and think about how it would exist in 3D space. What is the texture of its skin? What does it look like when light bounces off it? Do different parts of the body reflect light differently? In the case of fur vs. claw, as those are composed of different materials, they would reflect light differently. How about weather conditions? What does it look like when it's wet, or when it's windy? All these considerations are the Look Dev's field. I retrieved this information from the initial introduction on ScreenSkills.
As the role is not an entry-level position, the skills and knowledge required are much higher. Look Devs work with a wide array of programs and need substantial knowledge of all of them. Just from ScreenSkills, these programs include Arnold, Blender, Maya, Mental Ray, Photoshop, RenderMan, Substance Painter, V-Ray and ZBrush. Of course, these are all generic external tools available on the market; companies may have their own internal tools that these artists would need to know. One such example, according to their website, is Tonic from Walt Disney Animation Studios: an in-house tool specialized in the generation of hair.
From https://disneyanimation.com/process/look-development/
One responsibility highlighted on conceptartempire.com is that a Look Dev works to ensure the product, whether a game or a movie, maintains a consistent appearance. As such, they need an intimate understanding of the visual language the product is using, to keep all items as consistent with one another as possible and allow the clients' vision to come forth.
They are also responsible for developing a material authoring pipeline, i.e. choosing which software should be used for what, as various programs have their strengths and weaknesses, and it is the Look Dev's vast industry experience that helps establish this pipeline.
This is the kind of role I may find myself in in the future, depending on what I ultimately decide to pursue in my VFX career. It represents a culmination of years of work, study and experimentation, requiring vast knowledge of not only a variety of programs but also multiple spheres of knowledge.
These were the responsibilities of my other group members this week:
Sam – Continued updating and maintaining our schedule alongside development of concept art and other design aspects for our game.
Noah – Continued to greybox the levels for our game and created the game world our players would play around in.
Anson – Continued on refining the procedural animation of our cat and adding combat functionality like swiping.
Patrick – Composed the main theme of our game, which we will use as in-game music.
Daniel – Collected foley for use as sound effects.
I personally started developing the cat model that will replace the current stand-in model. While I have made a basic outline for the cat in the time given so far, it is far from complete. I originally wanted to finish and rig it, but this week was filled with work from other courses, and complications on my end from misunderstanding priorities forced me to split my time up significantly.
Cat
This was easily my busiest and toughest week of the project so far, and likely will be the toughest of the project in its entirety. Modelling the cat has proven more difficult than I originally planned, leaving me further behind on it than I wanted. On top of that came the overwhelming workload from my other classes, plus having to go back to the building models to attempt to make them hollow, as I had misunderstood the pipeline and was unintentionally holding up the creation of a first proper prototype. Despite working several hours to make the shells 'thick', i.e. giving the walls density so that when shattered they break apart as walls instead of as a single solid block, I ultimately couldn't get it to work properly, and the results were not what my group wanted. Instead I passed the models over to my group, who broke them apart in Blender using the technique they had developed for their test buildings.
One thing I did do before passing the models over was organize the UVs to work with our shaders. An example is below.
UV of a building
The idea is that the shader works by breaking the UV area into four quadrants that we determined: top left is the primary material, bottom left the secondary, top right is for windows, and bottom right is for miscellaneous objects, mostly doors. This was set up so we can procedurally texture the models and achieve a large color variety.
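The quadrant convention boils down to a simple remapping: each island's UVs are scaled to half size and offset into the quadrant for its material slot. A minimal Python sketch of that convention as I understand it (the slot names and offsets are my own labels for our layout):

```python
# (u offset, v offset) of each quadrant's lower-left corner
# in our four-quadrant UV convention.
QUADRANTS = {
    "primary":   (0.0, 0.5),  # top left
    "secondary": (0.0, 0.0),  # bottom left
    "windows":   (0.5, 0.5),  # top right
    "misc":      (0.5, 0.0),  # bottom right (mostly doors)
}

def pack_uv(u, v, slot):
    """Remap a UV coordinate in [0, 1] into the quadrant for `slot`."""
    du, dv = QUADRANTS[slot]
    return (u * 0.5 + du, v * 0.5 + dv)

def slot_of(u, v):
    """Inverse lookup: which material slot does a packed UV fall in?"""
    for name, (du, dv) in QUADRANTS.items():
        if du <= u < du + 0.5 and dv <= v < dv + 0.5:
            return name
    raise ValueError("UV outside the 0-1 range")
```

The shader effectively performs the `slot_of` half of this: it reads the packed UV and picks the procedural material for that quadrant.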
My failures to meet deadlines and the slow progress of certain models, alongside my time being stretched across four different projects, contributed to a burnout this week. It made me feel as if I were a burden to my group and a failure. That being said, I also know this is my first time doing anything like this, and my progress from knowing nothing about modelling to where I am now in the span of four months is something I should be proud of. I know where my failures lie and I will do my best to address them. I can only be thankful for how supportive my group was and how understanding of my issues.
This week we focused on green screen techniques: the multiple nodes Nuke provides for removing a green screen and how they differ. With that in mind, we also went over several techniques for fixing common issues like spill, and learned how to average out a green screen to make it easier for the nodes to key it cleanly. That being said, I will have to go back and review these lecture notes, as it was a lot of information and some techniques weren't properly digested. I would say this is my largest and most consistent issue when it comes to learning Nuke: given our short time in class, there is very little time to ask questions about the specific issues we face. These issues are of course personal, stem from a variety of potential problems that take time to tackle, and solving them may not help most other people in class.
We had homework to test our green screening techniques. Pictured below is my final alpha for the green screen, which to me looks pretty clean. There do appear to be some issues with it, though they don't seem to be reflected in the final comp, so perhaps they aren't a problem. One thing I was wary of was that the green box on the machine behind the mad scientist was having its green removed. To fix that, I simply roto'd the box, tracked it, and after the scene was keyed placed it back on top to maintain the box's color. One big issue is that the despill hasn't worked as intended and you can still see some green, especially when his arm swings and green appears around the balls.
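As a reminder for when I revisit the despill problem: one common despill approach (an assumption on my part, not necessarily what the node I used does) simply clamps the green channel so it can never exceed the average of red and blue, killing the green cast on foreground edges:

```python
def despill_average(r, g, b):
    """Average despill: limit green to the mean of red and blue,
    removing the green spill that contaminates foreground edges."""
    limit = (r + b) / 2.0
    return (r, min(g, limit), b)

# A green-contaminated edge pixel loses its cast
# (green clamped from 0.9 down to 0.5)...
print(despill_average(0.75, 0.9, 0.25))
# ...while a pixel that isn't green-dominant passes through untouched.
print(despill_average(0.5, 0.4, 0.5))
```

The trade-off, as I understand it, is that legitimately green objects (like the box on the machine) also get desaturated, which is exactly why roto'ing the box back on top was needed.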
Scene alpha
In my garage this week I was aiming to add texture to the wall of the scene to make it more grimy and rundown.
Grime 1, Grime 2
While for the most part it's successful, I have a continuing issue where my roto shifts and moves around, so elements in the scene like my poster and spray paint don't remain fixed in place and look strange. When I try to fix it by importing the tracking marks from previous files, or by making my own again, the result is even worse tracking than when I simply placed the elements into the scene by hand. I also lack the time to redo the rotoscoping with proper techniques and to experiment enough to fully grasp Nuke's 3D scene roto.
While last week was focused on developing the basics of the animation, this week I focused on another important element of satisfying scene composition: color and texture. My original vision was to strive for very bright, neon-like colors, as I felt an outlandish color scheme would give the piece a more abstract air. Unfortunately, despite my best efforts, the piece never looked how I wished and would more often prompt people to avert their eyes than engage.
Original colors
Though the exact color choices were poor, they were at least grounded in theory, as I used ColorHexa.com to find complementary colors. At the suggestion of others I went for more muted colors and came to this.
Adapted
As you can see, I stuck with the original color idea of pink and green, though shifted to more muted tones, and I think it was a significant improvement.
Another big thing to tackle was lighting. One thing I noted while watching the videos is that the other pieces were flat in their lighting: few shadows, and the shadows that are cast are pale and indefinite. With that in mind I created this lighting setup to imitate it.
Light Set up
Three spotlights light the main scene with wide dispersal cones to get everything as evenly lit as possible. I then used soft area lights over the scene and the background to balance out shadows and create the uniform lighting over the entire scene that can be seen in the shots above.
As I have delved further and further into the world of modelling, one skill I have continuously wanted to improve is texturing: to be able to not only create a model but bring it to life with the colors, textures and imperfections that make models feel real. As such, I have taken an interest in the Texture Artist entry-level position.
Before delving into the specific role of the Texture Artist, it should first be addressed that, as supported by ScreenSkills and my own friends in the VFX industry, in smaller companies there is often no distinction between a Texture Artist and a Modelling Artist. Due to their smaller size, people often must work multiple roles, and as such these roles are often combined into one. However, for the sake of this article I will research it as a distinct job.
A Texture Artist's primary role is to texture the models provided to them by the Modelling Artist, either by creating the textures themselves or by using stock assets (assuming the roles are distinct). The Texture Artist's goal, according to ScreenSkills, is to make whatever object they are texturing as realistic as possible in some cases, and in others stylized to function within the piece they are working on.
Primarily, Texture Artists work in graphics software such as Photoshop and Dreamweaver to create textures from scratch. Often they will have piles of stock images of objects with the textures they want, to use as reference for any textures they create. They also make use of 3D modelling software such as ZBrush, Blender and Maya, which they use to edit the models provided to them: texturing via bump maps and texture maps, or working directly on the model, such as physically sculpting individual scales onto an alligator.
Example of a textured model; the scales are built into the model. From: https://sketchfab.com/3d-models/nile-crocodile-swimming-8bdc3a1551fb4d9d9a58e56b9385bd22
Texture Artists need to be skilled in the fundamentals of art and photography, understanding color, texture and lighting, and how all these elements are affected by the lighting of a scene and the camera employed by the film.
Sally Wilson, the lead Texture Artist on Star Wars: The Last Jedi, supports this in the following video:
She speaks about how to properly texture any asset they are assigned. They must have a good understanding of the materials themselves, as something metal will interact with light far differently than flesh. Each material also has its own kinds of imperfections: for flesh this may be dimples, whereas metal will have scratches. All of these things change how light interacts with the subject.
I want to learn how to texture, and in pursuit of what would very likely end up as a generalist position, it is important for me to learn and understand the skill sets required to texture properly. This research has helped me understand where I can begin to develop my skills as a Texture Artist and pursue them in my own work here at university.
This week was spent mostly working on additional models for the buildings that will make up our map and cleaning up the models I made last week. I, of course, did further research on the building styles, found examples to base my work on, and updated the mood board accordingly. Images of the models I have produced so far are posted below.
Afterwards I exported them as FBX files so that our level designer could incorporate them into Unity and build a map more accurate to the game's intended appearance, instead of using the test models he had been using until now.
We also had a meeting on Friday, where we went over our plans, what we had done this week and what we will be doing next week. We also discussed and resolved some discrepancies between us on the finer points of the gameplay, so that we all had a clear and unified idea.
Next week I will focus on modelling, rigging and, if time permits, texturing the cat.
This week we focused on CGI compositing, specifically the titular machine this project is named after. The machine was already set up in its EXR to match the movements of the scene perfectly, so matching its motion was not our concern. Instead our concern was compositing via grading, color matching and lighting. The lesson covered using the multiple render passes, tips and techniques for color grading, and a focus on understanding that where you take your grades from matters: a single shot has multiple lighting conditions.
Lighting conditions
As an example, in our project the front room is brighter than the back, so if you graded the machine based on the front room, the machine would look out of place, as it wouldn't match the lighting conditions of its actual position in the scene.
To incorporate my work into the scene provided to us, I exported my own clean plate, and in addition exported my roto work as its own layer. By layering these I was able to make the machine appear behind the wall.
My rotowork
That being said, it taught me something quite valuable that I was missing: breaking my work up into small chunks. I previously had issues with my roto because I struggled to do it all in one scene; the roto would use the uncleaned plate, meaning that to achieve the final look I had to shift elements around in 3D space when I shouldn't have needed to. Instead I should complete scenes step by step. After cleaning the plate, I should have exported a version to serve as the roto plate. In future I will do my best to keep things simple and layered.
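The layering itself is just the standard 'over' merge applied bottom-up: machine over the clean plate, then the roto'd wall over that. As a reminder to myself of what a Merge (over) is doing per pixel, here is a sketch in plain Python with premultiplied RGBA values (an illustration, not Nuke's actual code):

```python
def over(fg, bg):
    """Composite a premultiplied RGBA foreground pixel over a background:
    result = fg + bg * (1 - fg_alpha)."""
    k = 1.0 - fg[3]  # how much of the background shows through
    return tuple(f + b * k for f, b in zip(fg, bg))

# Machine over the clean plate, then the roto'd wall over the result.
# Where the wall roto is transparent (alpha 0), the machine shows;
# where it is opaque, the wall hides the machine behind it.
clean_plate = (0.2, 0.2, 0.2, 1.0)   # opaque background plate
machine     = (0.5, 0.4, 0.3, 1.0)   # opaque CG element
wall_roto   = (0.0, 0.0, 0.0, 0.0)   # transparent away from the wall
comp = over(wall_roto, over(machine, clean_plate))
```

Keeping each of these as its own exported layer is exactly the step-by-step approach I should have used from the start.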
In addition to what had already been done for the scene, we were tasked with adding to it. As I thought it would be useful to know what the machine does, so I could create a story around it to influence the scene, I asked about it. There was no set purpose, so rather than invent one I decided to build the scene around keeping it vague. So far I have added bloodstained graffiti begging no one to turn it off, and created a poster for the scene. I still need to add more, and I am thinking about what further additions I can make.
This marked the first week of our next four-week development project: the Oddly Satisfying animation. While the title of the project seems quite vague, it is based on a series of animations found online that have become somewhat generified. An example compilation of such animations can be found here.
Oddly Satisfying
While they can vary wildly in tone, from realistic to abstract in both content and color palette, I have decided to go with a more abstract piece, to further differentiate this from the last project, where I aimed to create a more realistic-looking piece.
While I was stumped for a while on what to actually produce, I eventually had an idea for a machine that I would deem oddly satisfying, which I did a quick sketch of.
The basic idea is that a ring flows down from the top onto an arm, then slides, shaped just perfectly to fit, into the tube, and from there the animation loops infinitely. The abstraction comes in because the machine doesn't resemble anything particularly realistic and its purpose is left entirely to interpretation, while the oddly satisfying element comes from the ring's motion down the arm and the tube.
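Since the animation loops infinitely, the key constraint is that the ring's state at the end of the cycle must be identical to its state at frame 0. One way to think about this is driving everything off a normalized phase rather than the raw frame number; a small sketch (the 120-frame cycle length is a made-up value, not my actual timing):

```python
def loop_phase(frame, cycle_length):
    """Normalized position through the loop, in [0, 1).
    Frame `cycle_length` wraps back to the same phase as frame 0,
    so the animation can repeat with no visible seam."""
    return (frame % cycle_length) / cycle_length

CYCLE = 120  # hypothetical loop length in frames

def ring_progress(frame):
    """Drive the ring's travel down the arm from the phase, so that
    frame 0 and frame CYCLE put the ring in exactly the same place."""
    return loop_phase(frame, CYCLE)
```

Anything keyed off that phase (position along the arm, rotation, drop into the tube) is guaranteed to line up across the loop boundary.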
Initial Prototype
While I eventually figured out how to create an initial prototype, I had to modify it quite a bit, as the perspective of the machine revealed more of it than I had originally planned. I wanted there to be only one central piece that all the arms connected to, but I found, after animating it through a combination of edges converted to lines and line constraints, that the donut would have to phase through three other arms to slide down the first arm. This would be too egregious a breach of realism and would stop the animation from being satisfying. Another issue: to fill up space, it was recommended I add a plate that the tube could emerge from, to make the scene more visually interesting.
Loop not possible with an asymmetrical design
As I hadn't made this first version at the center of the world, or with any precision, I was left with the choice of starting again and setting it up carefully, or attempting to make it symmetrical by hand. After trying by hand I realized it was useless, so I instead recreated my progress in a new scene, taking care to make everything symmetrical.
Initial State, Animation Moving
Another facet of this week's class was MASH, a powerful motion graphics system in Maya. While I experimented with it, I did not ultimately use it in the current version of my animation. This is something I want to change, as I should try to learn as much of Maya as I can, so this weekend I will watch MASH tutorials to get a wider grasp of what it can do and how I can implement it.
I also made some slight modifications to the camera work of my Rube Goldberg machine and re-rendered it, as my instructor Nick pointed out that my cut goes against the main idea of Rube Goldberg machine animation: it is meant to be a single continuous shot showing off cause and effect.
As part of my initial research I created a padlet document to serve as a mood board that would be open for all my team members to contribute any pieces they felt I should be aware of in guiding my future modelling work.
Alongside simply finding pieces about neofuturism and brutalism by other artists, I also found design blog posts about XCOM 2's city design. Since this was one of the inspirations the team gave me, I figured reading and understanding these posts would help me understand what I was looking for in my work.
In addition to these blog posts I found two articles that could help me in understanding the design philosophies of the two different architecture styles.
First was "The Aesthetics of Science Fiction. What Does SciFi Look Like After Cyberpunk?" by Rick Liebling, posted on Medium. This piece goes over several architectural styles' connections to sci-fi and our changing vision of the future, and directly discusses Brutalism and its representation in the genre, giving me more information to base my designs on.
Second was "A Century of Futurist Architecture: From Theory to Reality" by Farhan Asim and Venu Shree, which speaks directly about neofuturism's design principles: namely, a focus on 'modern' technological inventions for its materials, such as glass and lightweight aluminum, paired with a focus on abstract design.
I also looked into how to use the Shatter function in Maya via its online documentation and YouTube tutorials; it was simple to understand and should be easy to implement into my design pipeline in future. This YouTube video provided me with lots of information and demonstrations.
I also watched a few tutorials on rigging to refresh myself and get accustomed to rigging an entire skeleton, as opposed to just a jaw and face as I had done last semester.
This video provided a lot of information on rigging and animation: how to set up constraints and controllers so that animations need not be done directly with the joints, which would be an issue. As modelling the cat will be my first major foray into organic modelling, I looked up some quick tutorials online; this one taught me a lot about subdividing my models after creating basic shapes, and about extensive use of the crease tool to enhance both my control and the organic look of the model.
I also looked at some quick tutorials on low-poly cat modelling to get an idea of how others in my field have done it, which I can then use when constructing my own. One such tutorial was this.
Lastly, I also created some models this week at the request of my teammates, so they would have more to present in an upcoming report. The models are shown below.
Brutalist big, Brutalist small, Neofuturist small, Neofuturist big
The models still need improvement, and in future I think what will give me the most trouble is scaling them appropriately. As I have not had to make multiple models in separate scenes that must ultimately sit next to each other in a shared environment, I am not used to worrying about scale.