Pretty crude, but I’m not going for quality atm. The detonations are very underwhelming; I can clean that up a bit later with a decent burn decal, a sphere effect to show the radius, and a bigger flash.
I also really want to re-work that icon to be less… bad? The changes to the blueprint graph were pretty minimal with this one; I only had to add a couple of new functions and a new ammo type. The HUD is starting to annoy me so I might redo it, but I want to start building out a meta-level game first so that I have more motivation to build the mechanics. I was thinking about doing a cross-country-trip kind of thing, with each “arena” having a specific mechanic to it; that way the levels can be kinda inconsistent and easier to make.
Added in a small, fast stabbing bot that will charge at your feet. I think there are some collision issues that I gotta get through (you can see me getting pushed back during the gameplay above), but I think I got the hook-ups nice enough that it’s not really a huge issue.
Animation is now REALLY easy for the bots; I can pump out a quick couple of keyframes in 20 minutes rather than the 2-3 hours it took before with Blender.
I also updated the class structure to be an actual class structure, so I can re-use code.
In addition I added some camera shake when stuff explodes.
I think there’s still a question of the directionality of the shake, but right now it helps portray an explosion when one goes off next to you. To a lesser extent I have it when you get shot, but nothing crazy yet.
…I’m gonna quit on the physical hit reactions thing and just make a blendspace. There’s a hackiness to the UE5 physical animations that I seem to be stuck on, and I do not want to spend much more time on it.
I’m still happy I ported all of the animations to UE5; that pipeline is much faster and cleaner than adding animations from Blender. Now I can start iterating on newer enemy behaviors a bit faster.
New enemy types
After going through the physical animation gauntlet I added in a new enemy type: a Grenadier. Right now they throw frags and flashbangs, and I’ve added them into the basic survival gamemode as random spawns. The game feels a bit more like a game, but the levels are so un-fleshed-out that it feels very vacant.
Also the bots are clipping with each other hardcore; not sure what’s up there…
Here’s a few minutes of unedited gameplay (crap quality because I haven’t messed with the WordPress upload limits yet).
It’s all still very rough, and there are still no sounds for the frags hitting or detonating, so you’ll just randomly die without a reason. Also I’ve hit one or two bugs that make you fly really fast across the map, sooo I’ll try fixing that.
In other news, I found out the way I was doing bullet hit effects was wrong, so I fixed that (a world context thing); now you get better hit interactions. My next goal is to clutter up the map above and fix the clipping, to try to get everything feeling a bit more alive.
Uhhhgggg, I know this is a better approach because it’s more WYSIWYG in Unreal and I’ll be doing less messing around, but it’s definitely re-doing work. I have to keep believing that physical hit reactions and IK will be worth it.
Walking is still the hardest IMO, everything after that should be easy as pie (day?).
Turns out the big tower of blueprint blocks I was talking about is 100% what you’re supposed to do (meaning I see it replicated in every tutorial I come across).
BEHOLD
Right after making the last picture I deleted 100% of my work for the last 3 hours…
So I recreated everything and realized I can do WAAAY less to get the same result. So the result of the redo:
Yeah… I still have to figure out why it splits like that. My guess is that I shouldn’t have separated the bones in Blender; I should re-import and see how it goes. But the IK works fine.
A bit smaller blueprint graph also:
Also the IK block I used is now smart enough to handle ground better than I was doing in the first picture.
Essentially what it does is:
1.) Draw a line from the end of the leg bone to the control
2.) If there is a hit, perform IK to the hit point; otherwise perform IK to the control.
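For my own notes, here’s roughly that decision written out as C++-style pseudocode rather than the actual Control Rig nodes (ChooseIKTarget, FootBoneLocation, and ControlLocation are names I made up for this sketch):

// Rough sketch of the foot-placement logic the Control Rig block performs.
// All names here are illustrative; the real implementation is Control Rig nodes.
FVector ChooseIKTarget(UWorld* World, const FVector& FootBoneLocation, const FVector& ControlLocation)
{
	FHitResult Hit;
	FCollisionQueryParams Params;

	// 1.) Trace from the end of the leg bone toward the control
	const bool bHit = World->LineTraceSingleByChannel(Hit, FootBoneLocation, ControlLocation, ECC_Visibility, Params);

	// 2.) If something was hit, IK to the hit point; otherwise IK to the control
	return bHit ? Hit.Location : ControlLocation;
}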
Now why use Control Rigs? Honestly, not too sure yet… My hope is that IK is easier (so I don’t have to add IK bones), but adding these controls seems tedious, and in addition your forward solve graph, I guess, has to be a big tower of blueprint power?
I’m probably missing the plot here, but also my “forward solve” seems to be undoing my “backwards solve”, where I assumed “backwards solve” was used for IK situations. I’m still playing around, but the hope is that by using Unreal internals I should be able to handle the physicality of the bots a bit better than I expect.
Which would be amazing if: 1.) Firefox supported usbhid, and 2.) I could remap the slider. But right now it’s really good for me to use OBS to record rather than the snipping tool, which records at like 20fps.
So for example here’s the control rig I described in action, BUT there’s no OBS window visible!
Otherwise I think I’m still dead-set on remaking the patrol bot animations in Unreal. Walking and reload might be the most annoying, but mostly I want to be able to make quick animations without the huge headache of .fbx exporting and importing (even with the Blender Unreal add-ons https://www.unrealengine.com/en-US/blog/download-our-new-blender-addons , which work amazingly for static meshes but are kinda sketchy for skeletal ones). I kinda wish Unreal had the option of slicing the animation list of an FBX and attempting bone resolution before importing. I really want to get this working, because then my workflow stays in Unreal for animating. Blender still blows Unreal out of the water for making meshes IMO, but animations in Blender still seem hacky with the action editor.
There are a few things I’m actively annoyed with when it comes to Control Rigs (which I won’t show you here because I’m still WIP with this one). I’m also a straight-up Control Rig novice, so I bet as I learn, these problems might be solved with better practices.
1.) You can’t manipulate the control shapes in the editor preview window. Seems like that would be an easy addition and should match the same kind of workflow as the “hold control to bump physics asset” thing.
2.) Control rigs are affected by bones. This one I get WHY, but it seems counter-intuitive that you would ever want to make a rig that is controlled by a parent bone. I get the idea of attaching to a bone in order to have a small sub-object (for example a turret on a large ship).
3.) When you add a control to a bone, it adds it to the children of that bone. This would be fine if #2 wasn’t a thing.
4.) Adding default forward solve assignments is not automated. I bet I could find a Python script to do this, but still, that blueprint tower of power really can and should be made upon generating a new control for a bone.
I have the patrol bots hooked up to throw them based upon an enumeration (which I don’t like; I’d rather use a subclass, but I shot myself in the foot earlier on).
I just hijacked the reload animation and basically said “if you’re a grenade bot, reload, and instead of throwing a mag up, throw a flash-bang”. Pretty stupid for now, but it’s a good proof of concept. The shakiness is because I half-implemented physical animations to handle bot hits. I’m still not happy about that; I want to move to a fully-Unreal animation setup because my animations are so simplistic, but that’s another avenue I gotta go learn.
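For reference, the enum branch is roughly this shape (EPatrolBotType, APatrolBot, ThrowFlashbang, and EjectMagazine are made-up names for this sketch, not my actual identifiers):

// Illustrative sketch of the enum-driven grenade behavior; not the real class.
UENUM(BlueprintType)
enum class EPatrolBotType : uint8
{
	Rifleman,
	Grenadier
};

// Called from the hijacked reload animation (hypothetical notify handler).
void APatrolBot::OnReloadThrowPoint()
{
	if (BotType == EPatrolBotType::Grenadier)
	{
		ThrowFlashbang();   // instead of tossing a mag up, lob a flash-bang
	}
	else
	{
		EjectMagazine();    // normal reload behavior
	}
}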
I have tried to build a Source engine mod probably 5-6 times since I started programming in the late 2000s/early 2010s, and I’m finally ahead of the curve: I was able to get a mod built before Valve did something that wasn’t in the public C++ repos and broke everything! (See https://github.com/ValveSoftware/source-sdk-2013/tree/master)
This is not an accomplishment but I’m happy it’s possible. Will I do anything with this? Probably not….
What happens when I launch the mod??
uhhggg… I could figure this out, but honestly, if I’m diving into Source I’d rather start with Source 2 and CS2. But there’s a 10-year-old part of me that longs to make a Source mod, put it on ModDB, start a dev blog, abandon the mod, notice another team picked up my mod, start a mod that competes against the original, and fall into a deep depression when the original mod team gets hired by Valve.
I’ve been playing too much XCOM, and I felt like my C++ skills were waning, so I thought it would be a fun quick project to spin up a turn-based squad commander system.
There’s probably a way to make these go away…
The camera
The first thing that’s distinctive about XCOM is that it’s an isometric game: the camera is fixed above the level and travels along the various levels of the map. To achieve this effect I made a quick camera pawn that does two things: ray-traces to hold a fixed height above the ground, and constantly aligns itself to a specific pre-set orientation.
Instead of going crazy with pre-mades I just threw a camera component onto a pawn class with a few pre-set parameters.
Header code:
//CLASS BLUEPRINT EXPOSED PROPERTIES
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "2D Camera Settings")
float BaseHeightOffGround;
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "2D Camera Settings")
float CameraAlignmentRate;
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "2D Camera Settings")
float HeightCorrectionRate;
UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "2D Camera Settings")
FRotator BaseCameraRotation;
//END CLASS EXPOSED PROPERTIES
//START COMPONENTS
UPROPERTY(VisibleAnywhere, BlueprintReadOnly, Category = "PrimaryCamera", meta = (AllowPrivateAccess = "true"))
UCameraComponent* PlayerCamera;
UPROPERTY(VisibleAnywhere, BlueprintReadOnly, Category = "Collision", meta = (AllowPrivateAccess = "true"))
UCapsuleComponent* CameraCollider;
In the tick function I have two lerps: 1.) for the current camera orientation, and 2.) for the result of a downward raycast + a fixed offset.
//Ensure we're rotated properly to the desired world rotator that the user specified
FRotator currentRot = GetActorRotation();
FQuat newRotation = FQuat::Slerp(currentRot.Quaternion(), BaseCameraRotation.Quaternion(), CameraAlignmentRate*DeltaTime);
SetActorRotation(newRotation);

// Get the world object
UWorld* World = GetWorld();
// Check if the world exists
if (World)
{
	FVector Start = GetActorLocation();
	FVector End = Start + FVector::DownVector * BaseHeightOffGround*10.0f;

	FHitResult Hit;
	FCollisionQueryParams QueryParams;
	QueryParams.AddIgnoredActor(this);

	bool bHit = World->LineTraceSingleByChannel(
		Hit,
		Start,
		End,
		ECC_Visibility,
		QueryParams
	);

	if (bHit)
	{
		FVector newWorldPos = FMath::Lerp(Start, Hit.Location + FVector(0,0,BaseHeightOffGround), HeightCorrectionRate*DeltaTime);
		DrawDebugLine(
			GetWorld(),
			Start,
			Hit.Location,
			FColor(255, 0, 0),
			false, -1, 0,
			12.333
		);
		SetActorLocation(newWorldPos);
	}
}
THE RESULTS!
Note: the red line is just for my debugging purposes. Also the camera will not be visible in game.
The DSP engineer in me is balking at the casual use of a delta time and a linear interpolation. But we’re on a PC running at low data rates (60Hz!?!? pfffffff, I could get this running at 24kHz) with variable frame sizes, so while I keep thinking “I could optimize this to be less MIPS,” I think I’ll just press on…
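If I ever do circle back, one standard fix (sketched here as an assumption, not what the tick code above currently does) is to turn the rate into an exponential decay factor so the smoothing is frame-rate independent:

// Hedged sketch: frame-rate-independent smoothing factor instead of rate*DeltaTime.
float Alpha = 1.0f - FMath::Exp(-CameraAlignmentRate * DeltaTime);
FQuat newRotation = FQuat::Slerp(currentRot.Quaternion(), BaseCameraRotation.Quaternion(), Alpha);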
Commanding / Game structure
In XCOM, turns play out like this: you select a squad member, choose their action, move to the next squad member, choose their action, and so on until you have no more squad members; then your turn is over.
Now there are a few ways I could implement the squad mechanics, specifically the way that each squad member gets controlled and how the player controller interacts with them. I could have the user’s controller re-possess each pawn upon selecting them (which might save memory but increase the work upon possession); however, I would still need to perform pathfinding to move the character to a specific location. Instead, I think the way I want to play it is to have every squad member owned by an AI controller that receives broadcasts from the player-controlled camera pawn. Below is my rough diagram (I think there’s a gamemode state in there that I need to throw in, but generally I think this is fine).
www.drawio.com FYI
In addition to handling the general flow of the game, I think the other upside is that this sets the game up for multiplayer from the start.
For the command/game state messages, I think I’ll probably make delegates in the game mode that the controllers can subscribe to. Then the deaths of squad members and selections I think I’ll push to the controllers? My thought here is:
I’ll probably jump back on this tomorrow and start coding up the events and run a test or two.
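As a rough sketch of what I mean by game-mode delegates (all class and delegate names below are made up for illustration, not actual project code):

// Hypothetical sketch: multicast delegates on the game mode that the squad AI
// controllers (and the camera pawn's player controller) can subscribe to.
DECLARE_MULTICAST_DELEGATE_OneParam(FOnSquadMemberSelected, APawn* /*SelectedMember*/);
DECLARE_MULTICAST_DELEGATE(FOnTurnEnded);

UCLASS()
class ASquadGameMode : public AGameModeBase
{
	GENERATED_BODY()

public:
	FOnSquadMemberSelected OnSquadMemberSelected;
	FOnTurnEnded OnTurnEnded;
};

// Illustrative subscription from an AI controller's BeginPlay:
// if (ASquadGameMode* GM = GetWorld()->GetAuthGameMode<ASquadGameMode>())
// {
//     GM->OnSquadMemberSelected.AddUObject(this, &ASquadAIController::HandleSelection);
// }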
I started messing with Godot (https://godotengine.org/) to fulfill the free-form open-source activist in me (also I have a friend making stuff in Godot).
So I started messing with the 3D side, but the goal is a more metroidvania-type game, so I moved into understanding the 2D asset pipelines a bit better. The way I want to handle this is to:
Make a spritesheet(s) with four different animations: Idle, walk, jump, land
Make a character using the sprite-sheet with simple controls
Stretch goal- Add landing and walking smoke effects
Making spritesheets has gotten so much easier since I was making simple games in XNA (15ish years ago, damn…); now you can do everything in the browser. I’m using https://www.piskelapp.com which is very fast and reliable for dumping a .png of a sprite sheet.
I’m going with a stick figure. I’m cool with jank walking; as long as you get four frames you’re good (https://en.wikipedia.org/wiki/Walk_cycle). My finalized sprite sheet:
The first few frames are idle (just a simple bob). The next four are the jump/land cycle, and the rest are walking (which I can flip for each direction in Godot).
Godot has a REALLY good sprite sheet importer:
It lets you chop up the sheet, then select which frames to import. That way you can keep 100% of the sprites in one huge sprite-sheet and not worry about flipping between files at runtime (which in reality isn’t a big issue, but I imagine you’re going to hit performance problems on low-RAM platforms with 1000+ PNGs loaded in memory).
The animation editor once you get frames in place is SUPER easy and lets you quickly edit and test spritesheets as needed:
All of these get stored inside your “AnimatedSprite2D” instance and can be selected via the Sprite Frames property:
Now, the scripting in Godot mirrors Unity (and kinda Unreal) in that everything has a setup and a loop function that you edit to handle internal logic for your “Nodes” (which are called Actors and GameObjects in Unreal and Unity respectively). Godot doesn’t have a nice state machine editor like Unity or Unreal, so you need to program it up yourself. The design of the simple character actually expects you to do this, so there are state and utility functions in both the character and sprite classes that let you do this easily:
extends CharacterBody2D
const SPEED = 300.0
const JUMP_VELOCITY = -400.0
@onready var _animated_sprite = $AnimatedSprite2D
#AnimationStateMachine
var startJump = false
var isFalling = false
var endJump = false
var isWalking = false
func _physics_process(delta: float) -> void:
	# Add the gravity.
	if not is_on_floor():
		velocity += get_gravity() * delta
		startJump = false
		isFalling = true
	elif isFalling and is_on_floor():
		endJump = true
		isFalling = false
	elif endJump and is_on_floor():
		endJump = false
	else:
		endJump = false
		isFalling = false

	# Handle jump.
	if Input.is_action_just_pressed("ui_accept") and is_on_floor():
		velocity.y = JUMP_VELOCITY
		startJump = true

	# Get the input direction and handle the movement/deceleration.
	# As good practice, you should replace UI actions with custom gameplay actions.
	var direction := Input.get_axis("ui_left", "ui_right")
	if direction:
		velocity.x = direction * SPEED
		if(absf(velocity.x) > 0):
			isWalking = true
		else:
			isWalking = false
	else:
		velocity.x = move_toward(velocity.x, 0, SPEED)
		if(absf(velocity.x) > 0):
			isWalking = true
		else:
			isWalking = false

	move_and_slide()
	handleAnimationStateMachine()

func handleAnimationStateMachine():
	if startJump:
		_animated_sprite.play("Jump_start")
	elif isFalling and _animated_sprite.animation_finished:
		_animated_sprite.play("Jump_loop")
		startJump = false
	elif endJump and _animated_sprite.animation_finished:
		_animated_sprite.play("Jump_end")
	elif isWalking and _animated_sprite.animation_finished:
		_animated_sprite.play("Walk")
	else:
		_animated_sprite.play("default")
		endJump = false
		isWalking = false
If you have no idea what this is doing: essentially, _physics_process sets the startJump/isFalling/endJump/isWalking flags based on the character’s movement each frame, and handleAnimationStateMachine() then plays the matching jump, walk, or idle animation based on those flags.
Final result:
So I’m pretty confident here that if someone threw me a bunch of sprites/2D art, I could go ahead and make a game. Godot has the same issue as Unity where the structure is much less defined than Unreal’s. Therefore it’s crazy easy to prototype, but scaling up will take more developer discipline to prevent Node-spanning bugs and state issues. In my head, the rough hierarchical structure of any 2D sidescroller would be:
The way to read this is that the controller points to the controlled. So the game master controls everything, the scene master controls characters and cut-scenes, etc. With this structure you don’t get confused with race conditions and the like. If you’re a lower-level object that wants to initiate a high-level event, you’ll be forced to request up the chain (i.e. an NPC that the player presses use on will send a request to the scene controller to start a cut-scene, which will then send a request to the game controller, which can then trigger the cut-scene player). It seems confusing and over-engineered, but if you consider the alternative, you would need everyone to align to some other kind of mental model, which in my opinion can be crazy painful to handle and leads to a BUNCH of crunch.
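To make that concrete, here’s a tiny sketch of the request-up-the-chain idea (written in C++ like the rest of this blog’s code; in an actual Godot project this would just be signals, and every name here is made up):

#include <iostream>
#include <string>

// Illustrative hierarchy: each layer only talks to the layer directly above it.
struct GameController
{
	void PlayCutscene(const std::string& Name)
	{
		std::cout << "Playing cutscene: " << Name << "\n"; // only the top level actually triggers it
	}
};

struct SceneController
{
	GameController* Game = nullptr;
	void RequestCutscene(const std::string& Name)
	{
		Game->PlayCutscene(Name); // forward the request upward, never sideways
	}
};

struct Npc
{
	SceneController* Scene = nullptr;
	void OnPlayerUse()
	{
		Scene->RequestCutscene("npc_intro"); // the NPC never pokes the game controller directly
	}
};

int main()
{
	GameController Game;
	SceneController Scene{ &Game };
	Npc Villager{ &Scene };
	Villager.OnPlayerUse(); // request bubbles up: NPC -> scene -> game
}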