Jumped back into Unreal, but I'm heavily procrastinating on the actual game parts, so I started messing with some of the environment tools instead. Specifically I've been poking at the grass setup tools (https://dev.epicgames.com/documentation/en-us/unreal-engine/grass-quick-start-in-unreal-engine). Not too complicated, essentially: your landscape material drives a few grass inputs and outputs, and the engine scatters the meshes for you.
So I whipped up a quick ground texture in GIMP:
Where each layer adds a bit more grass on top of the dirt:
Then I made two quick models in Blender, a grass clump and a tree.
Then, once you import both into Unreal, you can tie everything together using a "LandscapeGrassType", which is just a data holder for what you consider grass (which meshes to scatter and how densely):
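As a rough mental model of what that asset holds (these aren't Unreal's actual property names or paths, just a sketch of what I ended up filling in for the grass clump and the tree):

```python
# Conceptual sketch only: not Unreal's real property names or asset paths, just the
# data I ended up filling in for the grass clump and the tree.
from dataclasses import dataclass, field

@dataclass
class GrassVariety:
    mesh: str               # which static mesh to scatter
    density: float          # how many instances per area the painted layer spawns
    min_scale: float = 0.8  # random scale range so the field doesn't look tiled
    max_scale: float = 1.2

@dataclass
class GrassType:
    varieties: list = field(default_factory=list)

meadow = GrassType(varieties=[
    GrassVariety(mesh="SM_GrassClump", density=400.0),   # placeholder mesh names
    GrassVariety(mesh="SM_Tree", density=0.5),
])
```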
Then throw it into your landscape material:
The final result was surprisingly good. Obviously I could add more grass, touch up the ground texture, and make a tree that doesn't look like a plastic pylon. But I'm dumbfounded by how far this stuff has come along since Unreal 4.
I have a strong feeling you’re straight up not supposed to use this system to add trees (You can see in the video above that the bullets don’t collide with the trees). But I’m still happy at least that the workflow is quite simple.
Drawing onto the landscape is also very easy. If I wanted to make a path, I could just paint the rock layer in a quick line like in the video above.
Now on to the actual game stuff. I'm thinking my next gameplay change will be moving things closer to a kind of pseudo-Helldivers mission (rough sketch of the loop below):
1.) Players spawn on a big map
2.) Enemies spawn a wave to attack the central spawn and players fight to survive/save the base
3.) Once the enemy wave is over, players leave the base to try to destroy spawners
4.) The next wave starts; the players run back to base and restock
5.) Rinse and repeat for all the spawners
6.) Once the players destroy all the spawners, a boss wave starts
7.) Players defeat the boss and the game ends
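None of this exists yet; as a throwaway sketch of the loop I have in mind (plain Python, every name here is made up):

```python
# Throwaway sketch of the mission flow above -- not real game code, all names invented.
from enum import Enum, auto

class Phase(Enum):
    DEFEND_BASE = auto()
    HUNT_SPAWNERS = auto()
    BOSS_WAVE = auto()
    DONE = auto()

def run_mission(waves, spawners):
    phase = Phase.DEFEND_BASE
    while phase != Phase.DONE:
        if phase == Phase.DEFEND_BASE:
            waves.attack_base()            # enemies rush the central spawn
            waves.wait_until_cleared()     # players survive / save the base
            phase = Phase.HUNT_SPAWNERS
        elif phase == Phase.HUNT_SPAWNERS:
            spawners.destroy_next()        # players push out and kill a spawner
            # next wave starts, players run back to base and restock
            phase = Phase.DEFEND_BASE if spawners.any_left() else Phase.BOSS_WAVE
        elif phase == Phase.BOSS_WAVE:
            waves.attack_base(boss=True)
            waves.wait_until_cleared()
            phase = Phase.DONE             # boss down, game over
```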
Seems like a reasonable goal. It also sounds like a multiplayer game, but honestly I don't wanna go down that rabbit hole yet. I have half of a multiplayer game from a while ago, which seems like a good starting point, but it requires you to modify the engine build to pull in Steamworks. My goal here is still to avoid using C/C++ so I can focus on just assets until I REALLY need to optimize. In addition, making 3rd-person models will probably require me to re-make a bunch of the blueprints, which sounds horrible at this stage.
It's cold and I'm unhappy, so you can join me in this cacophony of GENERATIVE AUDIO! (Please turn your speakers down, this is the loudest thing I've put up and I don't wanna re-upload)
"How did you achieve this musically inclined symphony??" I'm glad you asked:
1.) Operator
2.) A Clipping and distorted 808
3.) A sub sound (Which is also operator I guess)
4.) A hit of that SWEET SWEET Compression
(also an LFO to play with the sub sound)
Now I’m going to scream into a microphone about my commute and complain about needing to drink more water.
I also found an old Kindle (like a HELLA old Kindle 1) and tried modeling it in Substance and Blender. Spoiler: it looks HORRID
I thought I could be lazy, not model the buttons, and just throw them into a bump map. But the geometry lords laughed in my face and made everything terrible (also I spent way too long making this color map in GIMP):
The "Tris" number is something I forgot to mention yesterday; it's the triangle count, which correlates directly with how hard a renderer (such as Unreal Engine) has to work to draw something. Note the number here is around 500 (which honestly might still be too high). The table from yesterday:
Hopefully that reinforces the points from yesterday. In addition, the UV maps are much easier to see, simpler to understand, and broken apart by each component of the table: tabletop, legs, crossbar, leg flange and crossbar flange.
In addition, proper use of mirror modifiers means you only have to model a single object and let the modifier produce the rest:
Here’s the mirror modifier for the legs:
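For anyone scripting it instead of clicking through the modifier panel, the same setup is roughly this in Blender's Python API ("Leg" is just an assumed object name):

```python
# Blender (bpy) sketch: mirror a single leg across X and Y to get all four.
# Assumes an object named "Leg"; the mirror happens around that object's origin,
# so either put the origin at the table's center or point mirror_object at one.
import bpy

leg = bpy.data.objects["Leg"]
mirror = leg.modifiers.new(name="Mirror", type='MIRROR')
mirror.use_axis[0] = True   # mirror across X
mirror.use_axis[1] = True   # mirror across Y as well, giving four legs from one mesh
# mirror.mirror_object = bpy.data.objects["TableCenter"]  # optional: mirror around an empty instead
```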
All in all, it's a table; it does what it's supposed to and holds things off the floor. If I wanted to get interesting I could export each portion to Unreal individually, re-attach things in-game, and then when the player breaks the table it breaks semi-intelligently. However, I just wanted a table, not a crazy physics object (yet).
Also here’s the table colored:
The crappy wood coloring is from a quick noise texture in Blender:
I have better wood/metal materials in the game so once exported that should clean itself up (hopefully).
Look at this table! Seems good right? …Right?……RIGHT?
I can promise you this is not good. You see, when you make 3D models you can't just "make" the models; you need to plan and think it through. The big things are:
1.) Build the model out of as few simple shapes as possible
2.) Avoid complex faces that are not quads (i.e. 4-sided, 4-vertex faces where it's possible to rotate the face into a flat rectangle)
3.) Don't repeat work (aka if you're making a table, make the leg once and copy+paste it…)
Now let me show you how I violated ALL of these checks
Above is the model view, but I'm showing you the faces and vertices of the model. You can almost immediately see the violations here. Everything is one piece, the legs were obviously done individually, there's a weird slice through the middle of the table which serves no aesthetic purpose, and if you zoom in on the top of the legs…
A TRIANGLE! However, if you take the same shot in wireframe you can see something much worse.
The bit circled in yellow is a hidden, folded face!!!! A big no-no, especially for game dev where you're trying to minimize the number of faces rendered on screen at once. Now you may ask yourself, "Will, it's still a table, why do we care?" The problem comes in when you make UV maps.
To the uninitiated, a UV map is a 2D projection of the external-facing surfaces of an object; for a cube, if you unwrap it you'll get a chunky plus-sign shape (more about this here: https://lkinulas.github.io/development/unity/2016/05/06/uv-mapping.html , which I skimmed but it seems better than anything I could write). Usually the way adding textures onto a 3D model works is: you make the UV map in Blender, then you dump the model into Substance Painter or an equivalent program where you can paint onto the model in 3D, then you export the results as image textures. For example, here's the UV map of a keypad I made last year:
Here each of the keys has its spot on a single UV map, so a single image represents the color (known as the albedo the majority of the time, at least with Blender and Unreal, which both use "physically based rendering", see https://en.wikipedia.org/wiki/Physically_based_rendering).
The 3D model holds the decoder ring (which is the UV map) to convert the image file into properly positioned colors on the 3D model. Now you may ask, "hey, everything you made for this game is literally a flat color and there's no detail to misalign", which is true. However, modern game engines literally take the UV map and use that information to calculate how the surface interacts with light. An improperly made UV map can cause 3D models to flicker in bright lights, make them dark when they should be light and vice-versa. In the worst cases it can cause them to disappear entirely! In addition, when you start doing things like bullet holes or blood stains on world models, an improper UV map will make the decal that's put onto the model land far from the intended position (think: a bullet is shot at a table, the bullet hole should be where the bullet hit).
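To make the "decoder ring" idea concrete, here's a toy sketch of the lookup in plain Python (nothing any engine actually runs): interpolate the face's UVs for a point, then read the texel there.

```python
# Toy sketch of UV lookup: given a face's per-vertex UVs and barycentric weights for a
# point on that face, find the albedo texel that point maps to. Purely illustrative.
def sample_albedo(texture, uvs, weights):
    """texture: 2D grid of (r, g, b); uvs: three (u, v) pairs; weights: barycentric (w0, w1, w2)."""
    u = sum(w * uv[0] for w, uv in zip(weights, uvs))
    v = sum(w * uv[1] for w, uv in zip(weights, uvs))
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    height, width = len(texture), len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]
```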
So now, back to the table. The main issue here is that Blender is actually pretty awesome at automating the generation of UV maps… assuming your model is built from simple geometry (i.e. sphere, cube, cone, torus, plane) and doesn't have stray triangles. So if we try to automate the unwrap of the table:
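(The automated unwrap I'm leaning on is Blender's Smart UV Project; in bpy terms it's roughly the below, where "Table" is just what my object happens to be called and the numbers are rough guesses.)

```python
# Blender (bpy) sketch: auto-unwrap an object with Smart UV Project.
# "Table" is an assumed object name; the parameter values are rough guesses.
import bpy

obj = bpy.data.objects["Table"]
bpy.context.view_layer.objects.active = obj
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project(angle_limit=1.15, island_margin=0.02)  # angle_limit is in radians in recent Blender
bpy.ops.object.mode_set(mode='OBJECT')
```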
To give context for what you're looking at: the left-hand side shows the UV layout of the table, where each grey square directly corresponds to a face on the 3D model on the right. If I select an individual face:
It shows exactly where the face will appear on the output image. So if you go back to the first image, you can see that if I tried to just blindly apply either lighting effects or an image to the UV map, we'd start hitting issues. For instance, if I put a light directly on this corner:
What will most likely happen is that everything in between those two faces will be lit in addition to the two faces in question. What’s in the middle you ask?
Not the areas you would expect to be lit. There are also other issues here that I could paper over by getting creative with materials (which determine the direct lighting parameters like luminosity, albedo, etc.), but that would be much more work than just restarting: make a tabletop model, make a leg model, make a crossbar model, then copy+paste the whole thing back together.
With all of this you may also still be asking: 1.) Why are you making a table 2.) How does this fit into a game? 3.) Why are you blogging and modeling on Saturday night?
Answers: 1.) I haven’t made a model in a few weeks and I wanted to get my feet wet again 2.) Idk, places have tables. Wooden tables. Nice tables. With cross bars. 3.) It’s cold. I feel like crap and shut up
Also, I'm straight up dreading adding more animations to the robot. I need to add a knife stab animation, a grenade throw animation and some kind of heavy shot animation. Unreal is very, very bad at handling an updated model with updated animations. The process involves re-exporting everything as another .fbx file, deleting redundant animations, deleting the newly re-imported base model, re-assigning each animation's skeleton to the original skeleton, and hoping you remembered the export settings from 6 months ago. If you forgot the aforementioned export settings, you turn a knob and keep doing the same process until things look kinda right. I'm debating buying Maya just to avoid this situation, but I'd love to avoid learning new things until I get this silly game on Steam.
Editor's note (1 day later): I realized I wrote this whole thing without explaining why I did a bunch of dumb stuff: I was getting back into modeling and I just ran with the feeling for an hour rather than thinking it through. So I made a bunch of critical early mistakes, which makes the model very hard to use.
I made a store stand that I'll put a bipedal robot behind to be a shop owner. It's currently not scaled or set up right to work. I think I'm going to make the lightbulb its own object so that I can swap colors on the fly. For now I kept the bulbs in because I wanted to see what they would look like on the wire. On the counter is a cardboard tip box and a metal lock-box.
Behind the counter is a metal bucket (filled with what? Idk) and a generator that I'll probably break out into its own object as well.
In addition I wanna put a decal on the sign up top to say something like “robot killing weapons”.
Also, I need to copy+paste the table a few times to make a full stand kind of thing. The proportions are definitely off, so when I pull it into Unreal it will be flippin' huge. But before I do any scaling I'll need to finalize everything on the table and fix up the wires so they're not as blocky. Flat shading is fine, and I think I can keep that vibe while putting some wood texture on it for the next go-around.
I have a few cassettes of older music that I wanna sample and put into something in Ableton. Why use cassettes? They're cheap as hell; this one was $4 from my local record store.
I also made this guy:
Simple enough, and getting it in-game went smoothly. I don't think I want to make this a first-person model; I'll probably just attach it to the front of one of the bots to make a kamikaze bot that rushes the player.
I also spun up a quick "use" system, which is just a ray-trace off the front of the player's camera to whatever is in front of them. Then I made a generic "usableObject" interface that anything can implement to receive use commands.
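The blueprint graph is just nodes, but the logic boils down to something like this plain-Python sketch (every helper name here is made up; none of it is real Unreal API):

```python
# Sketch of the "use" system: trace forward from the camera and, if the hit actor
# implements the usable interface, send it a use command. Helper names are invented.
USE_RANGE = 300.0  # how far in front of the camera the trace reaches (made-up units)

class UsableObject:
    """Anything that wants to react to the use key implements this."""
    def on_use(self, user):
        raise NotImplementedError

def try_use(player, world):
    start = player.camera_location()
    direction = player.camera_forward()
    end = [s + d * USE_RANGE for s, d in zip(start, direction)]
    hit = world.line_trace(start, end)            # hypothetical ray-trace helper
    if hit is not None and isinstance(hit.actor, UsableObject):
        hit.actor.on_use(player)                  # generic "use command"
```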
This will be the first usable thing:
Idk what it will do but my hope is to make quick chat logs when you press use, or even a buy menu.
Been doing more with the game and got to the point where levels are starting to feel like levels and my “assetImports” folder is blowing up.
More work still needed but I’m starting to get a warehouse vibe that I think I’m going to roll with until it gets stale.
I pretty much gave up on the hit reactions for now. If I come back to that, I think I'll need to make a human-style character, then look through some Unreal tutorials before I can move back to the bot.
Working on these guys made me pull in all of the knowledge of Unreal and Blender I've learned over the years. I'll break it down into the hurdles that I had to overcome.
1.) Retargeting skeletons
All of these models have their own skeletons, and each animation that you import has its own skeleton assignment. For the life of me I had no idea how to swap animations that I imported for version 1 of a model over to version 2 (even though all of the skeleton bones had the same names). I went down a dark path of IK rigging just to realize I should've just right-clicked a single animation.
…nuts…I had to re-import the assault rifle because I removed the stock for the bot and I made a new animation for the bot:
2.) Making the bots reload.
So again, the goal was to make the bots throw a physical object into the air so that the player would have the opportunity to shoot it. This is a crazy concept that requires a lot of tweaking to get right. The idea here is that I mark a point on the back of the bot as a "throw point" where I spawn a mag.
Then Unreal has a system called “Animation Notifies” which lets me communicate from an animation asset through an animation blueprint back to the controlled pawn.
You can see there are two here: “release Mag” which is the bot dropping a mag and “Spawn Mag” which creates and throws the new mag.
These are then tied to triggers within the animation blueprint to call back to the pawn.
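Conceptually the chain works out to something like this (a made-up Python stand-in for the notify → anim blueprint → pawn hops, not how the Blueprint graph is literally wired):

```python
# Sketch of the Animation Notify chain: the animation fires a named notify, the anim
# blueprint relays it, and the pawn does the actual gameplay work. All names invented.
class BotPawn:
    def release_mag(self):
        print("detach and drop the empty mag")

    def spawn_mag(self):
        print("spawn a fresh mag at the throw point and toss it into the air")

class BotAnimBlueprint:
    def __init__(self, pawn):
        self.pawn = pawn

    def on_notify(self, name):
        # The anim blueprint is just a relay from animation events back to the pawn.
        if name == "Release Mag":
            self.pawn.release_mag()
        elif name == "Spawn Mag":
            self.pawn.spawn_mag()

# The reload animation would fire these at the frames where I placed the notifies.
anim_bp = BotAnimBlueprint(BotPawn())
anim_bp.on_notify("Release Mag")
anim_bp.on_notify("Spawn Mag")
```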
Honestly not a great system, but it's apparently how Unreal wants us to handle these things. Ideally you would have a nicer hierarchy of ownership, but making what I would consider "good" software left the room once I decided to go with Blueprints.
3.) Unreal’s AI is helpful and horrible at the same time.
The tree I showed yesterday was very simplistic relative to this. When looking at this, the first reaction is "oh yeah, that makes sense", then "wait, why would you do it this way", then finally "I don't understand what this picture is". So, a quick explanation: start at the root node and go down to the sequence node, which triggers the nodes below it from left to right. The green box up top is called a "service" and is essentially a while(true) loop that runs while the sequence is executing. The blue boxes are if-checks that happen before executing each node; for example, "doIHaveAmmo" is used twice here and checks whether the bot has ammo before executing. In the event that one of these blue boxes returns false, the entire sequence is canceled and you go back to the first node in the sequence (which in this case is the reload node).
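If the picture doesn't land, here's the same idea as a tiny throwaway Python sketch of a sequence with decorator-style checks (nothing to do with Unreal's actual classes):

```python
# Tiny behavior-tree sketch: a sequence ticks its children left to right, and a
# decorator-style check (like "doIHaveAmmo") can cancel the sequence before a child runs.
class Sequence:
    def __init__(self, children):
        self.children = children          # list of (check, action) pairs

    def tick(self, bot):
        for check, action in self.children:
            if check is not None and not check(bot):
                return False              # check failed: the whole sequence is canceled
            action(bot)
        return True

def do_i_have_ammo(bot):
    return bot["ammo"] > 0

def reload_if_empty(bot):
    if bot["ammo"] == 0:
        bot["ammo"] = 30

def move_to_player(bot):
    pass                                  # stand-in for the movement task

def shoot(bot):
    bot["ammo"] -= 1

tree = Sequence([
    (None, reload_if_empty),              # first node in the sequence: the reload branch
    (do_i_have_ammo, move_to_player),     # "doIHaveAmmo" used twice, like in the real tree
    (do_i_have_ammo, shoot),
])

tree.tick({"ammo": 0})
```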
4.) Niagara confuses me and I might get back into HLSL to avoid it
I made the muzzle flashes but I still kinda hate how they look. I started trying to make some 2d stuff but that ended up not really working:
The idea there was to make a ribbon whose width followed a low-frequency sine wave, with a faster radial sine wave adding bumps along the side. That didn't really work, however.
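For the record, the width I was going for was roughly this (just the math I had in my head; every number is a guess):

```python
# Sketch of the intended ribbon width: a slow sine for the overall swell plus a
# faster sine for the bumps along the edge. All amplitudes/frequencies are made up.
import math

def ribbon_width(t, base=1.0, slow_amp=0.5, slow_freq=1.0, bump_amp=0.15, bump_freq=8.0):
    """t runs 0..1 along the ribbon."""
    slow = slow_amp * math.sin(2 * math.pi * slow_freq * t)
    bumps = bump_amp * math.sin(2 * math.pi * bump_freq * t)
    return max(base + slow + bumps, 0.0)
```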
Then I got creative, made a mesh in Blender, and imported it as the fire cloud that comes out the front of the gun.
I stuck with that 2nd iteration and it seems better than the ribbon, but I still want some more cloud-ness to it.
When firing (as seen above) it kinda looks like an orb just spawned in front of your barrel. Most other games do this with 2D sprites; however, I'm a big fan of 3D muzzle flashes like those from the TF2 announcement trailer:
I might just add some more spikes onto the current model and call it quits on that front.
Next up for work is probably the environment. I have a bunch of reference images of places I thought were cool in Boston and the surrounding area, and I'm going to try to generate some textures using them as inspiration. Eventually I'll have to do audio too, but I'm really dreading looking back into Unreal's DSP system. It's very intense (and I literally do hardware-based DSP in my day job).
Woo! Got some animations in game. I made the mistake of thinking that Unreal Engine control rigs were actually something useful for what I'm doing. Once I figured out they weren't, I just gritted my teeth, did the whole animation process in Blender, and transferred it over into Unreal.
These animations will definitely need to be remade at some point but I’ll punt on that until I think I have a level of robot shooting going on.
I tried being clever and pretty much wasted 6 hours….
I spent a bunch of time rigging up this model; a large portion of that time was spent undoing the dumb modifiers I put on the legs. Specifically, I used array modifiers and mirror modifiers to generate the rest of the legs after making the first one.
Using that technique means that when you finally go to export, you have to manually re-assign origins somewhere on the model, which is something you really need to keep track of to rig properly.
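For reference, the cleanup I ended up doing by hand is roughly this in Blender's Python API ("Leg" is just an assumed object name):

```python
# Blender (bpy) sketch: bake down the array/mirror modifiers on a leg object and put
# its origin somewhere known before rigging/exporting. "Leg" is an assumed object name.
import bpy

leg = bpy.data.objects["Leg"]
bpy.context.view_layer.objects.active = leg
leg.select_set(True)

for mod in list(leg.modifiers):
    bpy.ops.object.modifier_apply(modifier=mod.name)   # the generated legs become real geometry

bpy.context.scene.cursor.location = (0.0, 0.0, 0.0)
bpy.ops.object.origin_set(type='ORIGIN_CURSOR')        # origin back to a known, trackable spot
```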
This all went fine, but I decided to put the bones at a 90 degree offset because I thought it would make them nicer to manipulate. Which I would say is true if I were going to manually manipulate the bones. However, I tried getting smart and moving into Unreal Engine control rigs: https://dev.epicgames.com/documentation/en-us/unreal-engine/control-rig-in-unreal-engine
In theory that should let me animate in Unreal, saving the export/debug step I normally have to do from Blender. However, I kept hitting weird issues with setting up controls and having Unreal auto-recognize things… Then I noticed the real issue once I tried testing the physics asset:
So that isn't good. Essentially, Unreal makes a set of simple geometric shapes that it glues onto your model on import to try to get a good collision setup. In this case, my twisting the bones made the system give up; it just surrounded the body with a capsule and the legs with one big box:
So I attempted to fix this (which looked good)
…But then I hit the next problem…
What's happening here is a parenting loop. Looking back at the original model, I put the bones 90 degrees offset from the mesh (again, because I think I'm smarter than how everyone else does things).
With the bones offset from the mesh, I had to attach the physics geometries to the mesh itself rather than the bones. So then Unreal was trying to modify the position of the mesh, but the bones had no constraints tied to the physics, so they tried putting the mesh back, then the physics moved the mesh, then the bones moved it back, then the physics moved the mesh, then the bones moved it back… and that comes out as the result you see above, which is everything kind of giving up. The fix here is to re-orient the bones so I can attach the physics geometries to the bones… I basically wasted my morning setting myself up for failure. However, this is the first complex mechanical mesh I've ever done, so it's still a learning experience in my head.