“Building in progress”

(Ignore the messed up ragdoll)

Sidebar:
If you attempt to pull a widget UI on construction rather than BeginPlay you’ll get a pointer that is valid but doesn’t match the widget the game is actually using. My guess is that Unreal sets up UI placeholders in memory before BeginPlay (probably why they mark most UI elements as “DON’T USE THIS FOR GAMEPLAY”).
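For reference, here’s roughly what that looks like in C++ (the project itself does this in Blueprint; the actor class and the “TerminalWidgetComponent”/“TerminalScreen” members are placeholder names):

```cpp
#include "Components/WidgetComponent.h"
#include "Blueprint/UserWidget.h"

// Hypothetical actor that owns a UWidgetComponent* named TerminalWidgetComponent
// and caches a UUserWidget* named TerminalScreen.
void ATerminalStation::BeginPlay()
{
    Super::BeginPlay();

    // By BeginPlay the widget component has created the real UUserWidget, so this
    // pointer matches what the game is actually rendering. Doing the same thing in
    // the constructor / OnConstruction hands back a placeholder instead.
    TerminalScreen = TerminalWidgetComponent->GetUserWidgetObject();
}
```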

Posted for the cool, cool header image

“It’s not blurry, it’s a design choice…”

This is the UI for the terminal. The next step is to hook it up to the game. I’m thinking the main difficulty there is that I’ll have to rework the assemblers to be controlled by a terminal rather than the game mode, plus add callbacks to report what has been built back to the terminal. Finally I’ll need to add a teamID to the player (which is a simple if statement, so that shouldn’t be terrible). Also I kinda hate that the edge around the trackball is the same color as everything else…but “game before polish” needs to be my mantra for the next few weeks.

Terminal Time

A while back I made a terminal model that looked meh but was designed to serve as a stand in for some kind of computer interaction:

I started out wanting to make a command-line interface, but then I changed my mind and went for a Doom 3 style interface:

Example was pulled from here: https://www.reddit.com/r/Unity3D/comments/4nkvj9/ingame_gui_like_doom_3/

Setting up the actors

To accomplish this I found that Unreal has a whole system for exactly this called a “widget interaction component” (see https://dev.epicgames.com/documentation/en-us/unreal-engine/umg-widget-interaction-components-in-unreal-engine), which is great! However, if you’re trying to do more than just push buttons, it’s a real pain to get everything set up to map properly.

The component works by doing a ray trace every frame so many meters in front of you. If that trace hits a widget component (the container class for UI components, see: https://dev.epicgames.com/documentation/en-us/unreal-engine/building-your-ui-in-unreal-engine), then the widget component receives events as if it were the 2D UI used for basic UI development. In other words: Unreal makes a virtual screen, I add the “virtual screen user” component to the player character, and then it should be the same UI development process I would use for menus, HUDs, etc.

the “virtual screen user class”
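As a rough sketch of that setup in C++ (mine is on the Blueprint character; the “FirstPersonCamera” and “WidgetInteraction” members are assumed names):

```cpp
#include "Components/WidgetInteractionComponent.h"

// Constructor of a hypothetical player character that already has a
// UCameraComponent* member named FirstPersonCamera.
AGameCharacter::AGameCharacter()
{
    // The "virtual screen user": traces forward every frame and forwards
    // pointer events to any WidgetComponent the trace hits.
    WidgetInteraction = CreateDefaultSubobject<UWidgetInteractionComponent>(TEXT("WidgetInteraction"));
    WidgetInteraction->SetupAttachment(FirstPersonCamera);
    WidgetInteraction->InteractionDistance = 300.f; // trace length in front of the camera (cm)
    WidgetInteraction->bShowDebug = true;           // draw the trace while lining things up
}
```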

To show the UI in game I made a quick actor that held my terminal and my UI widget

My initial UI widget is just a white box cursor and a translucent green background

So ideally I would just use my widget interaction UI and do 100% of the development on the widget side. HOWEVER, I found out there isn’t a good “get what the widget thinks the cursor position is” function within the widget itself (the widget that holds the UI design also has its own execution path, if you didn’t know that). The only way to get the cursor position was the ACTUAL cursor position, which won’t work here because the real cursor is always centered on the screen during gameplay.

Sooooo, to fix this I had to make a “Set cursor position” public function that can update the location of the white square:
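In C++ terms the function is just “move the cursor image within its canvas” (hedged sketch; “UTerminalScreenWidget” and “CursorImage” are placeholder names, and the image is assumed to sit directly on a Canvas Panel):

```cpp
#include "Blueprint/UserWidget.h"
#include "Components/Image.h"
#include "Components/CanvasPanelSlot.h"

// Member of a hypothetical UUserWidget subclass that has a UImage* named CursorImage.
void UTerminalScreenWidget::SetCursorPosition(const FVector2D& NewPosition)
{
    // The white square lives on a Canvas Panel, so its slot can be cast to a
    // CanvasPanelSlot and repositioned directly.
    if (UCanvasPanelSlot* CursorSlot = Cast<UCanvasPanelSlot>(CursorImage->Slot))
    {
        CursorSlot->SetPosition(NewPosition);
    }
}
```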

Then I had to add yet ANOTHER tick function to my main character to handle terminals:

So here’s the order of events:

1.) The widget interaction component detects it’s hitting a widget in the world here:

I then grab that widget and hold it in memory

2.) The tick function executes on the next tick

Which sets the position of the white square.
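Put together, the terminal part of that tick looks something like this (sketch only, reusing the placeholder names from the earlier snippets):

```cpp
#include "Components/WidgetInteractionComponent.h"
#include "Components/WidgetComponent.h"

void AGameCharacter::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    // Step 1: the interaction component reports which widget component (if any)
    // its forward trace is currently hitting.
    if (UWidgetComponent* HitScreen = WidgetInteraction->GetHoveredWidgetComponent())
    {
        // Step 2: hand the 2D hit location (already in the virtual screen's space)
        // to the widget so it can move the white square.
        if (UTerminalScreenWidget* Screen = Cast<UTerminalScreenWidget>(HitScreen->GetUserWidgetObject()))
        {
            Screen->SetCursorPosition(WidgetInteraction->Get2DHitLocation());
        }
    }
}
```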

The result:

Seems decent enough, I just gotta mess with the sizing a bit more and disable the debug drawing before really digging into it.

Making a simple UI

To make a quick cursor (if you ever need this information)

1.) Select all

2.) Scale down to 1/4 of the screen

3.) Fill black

4.) Rotate your selection 45 degrees

5.) Remove the corner

6.) undo that because that would never work

7.) Invert your selection and fill white

8.) Undo that because that makes no sense

9.) Select all

10.) Shrink by 50

11.) Rotate your selection 45

12.) Undo that and select all again

13.) Shrink by 25

14.) Rotate by 45

15.) Erase everything in your selection

16.) Select all

17.) Shrink by 50

18.) Scale horizontal by 50, increase vertical to 125

19.) Rotate by 45

20.) Undo and rotate by -45

21.) Be sad that you didn’t scale large enough on step 18

22.) Undo the last 4 steps and increase vertical size to 150

23.) Do steps 19-22 again and resize to 175

24.) Select all black, and realize you have weird artifacts from rotating

25.) Mess with the levels to remove all of the artifacts

26.) Make a border around the black selection and fill white

27.) Invert the colors because you realize most cursors are white not black

28.) Gaussian blur to make sure no-one can see your horrible brush strokes

29.) Profit?

Now with that squared away, I imported everything into Unreal and made a quick UI with a few buttons:

Then I threw in some logic to lower the weapon when you’re looking at a terminal, and to make the “fire” event left-click the simulated UI:
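The “fire becomes a click” part boils down to a press/release pair on the interaction component; something along these lines (sketch, placeholder names again):

```cpp
#include "Components/WidgetInteractionComponent.h"
#include "InputCoreTypes.h"

void AGameCharacter::OnFirePressed()
{
    // If we're looking at a terminal screen, click it instead of shooting.
    if (WidgetInteraction->IsOverInteractableWidget())
    {
        // Press + release gives the simulated UI a full left click.
        WidgetInteraction->PressPointerKey(EKeys::LeftMouseButton);
        WidgetInteraction->ReleasePointerKey(EKeys::LeftMouseButton);
    }
    else
    {
        FireWeapon(); // placeholder for the normal weapon path
    }
}
```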


The final result:

Still not sure why the cursor is so blurry when moving around a bunch; probably because of some of the motion blur I have on. But generally the goal is to have one of these on each assembler, and I should be able to use it as a capture-point kind of process: gain control over an assembler, create minions, go capture more assemblers, etc. Which I think is now my basic “game loop”.


So now I think I know what the game is. I’m thinking one map, 4-5 assemblers, one “Hub” boss area, and I’ll probably package in my survival game mode because it’s kinda already done. Essentially this will be a single-player MOBA against an overworld AI.

So that means I need:

1.) An overview map UI element

2.) Region based spawning for the assemblers

3.) A “Boss hub” whatever that means

4.) Some kind of stationary turrets, which will essentially make it a MOBA

5.) Some kind of countdown that adds an element of urgency to the player

So I added 5 tasks while removing one…not a great ratio (see https://trello.com/b/dmIooAod/blacklaceworking-board ) but knowledge is progress.

Oil and Strings take memory

Also for some reason my output log is taking up 8 gigs of memory.

I admit I had a divide-by-zero error, but why did it fill up the output log to the point of nearly crashing my computer…

In other news, I reworked the bot deaths so they spray oil everywhere:

This is an oil material I made in GIMP, then randomly subdivided and threw on the walls. The oil material I made from an awesome blood-splatter tutorial here: https://www.gimpusers.com/tutorials/blood-splatter-texture

Which, long story short, is just the “sparks” brush grey-scaled. But still….
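For the “throw on walls” part, here’s one way that kind of spray can be done (hedged sketch, not necessarily my exact setup): line trace in random directions from the death location and drop a decal using the oil material wherever the trace lands.

```cpp
#include "Kismet/GameplayStatics.h"
#include "Kismet/KismetMathLibrary.h"
#include "Engine/World.h"
#include "Materials/MaterialInterface.h"

void SprayOil(UWorld* World, UMaterialInterface* OilMaterial, const FVector& Origin)
{
    const int32 NumSplats = 8;
    for (int32 i = 0; i < NumSplats; ++i)
    {
        // Pick a random direction and see if it hits a wall/floor nearby.
        const FVector Direction = UKismetMathLibrary::RandomUnitVector();
        FHitResult Hit;
        if (World->LineTraceSingleByChannel(Hit, Origin, Origin + Direction * 300.f, ECC_Visibility))
        {
            // Orient the decal to the surface it hit and randomize its size a bit.
            const FRotator DecalRotation = Hit.ImpactNormal.Rotation();
            const float Size = FMath::RandRange(20.f, 60.f);
            UGameplayStatics::SpawnDecalAtLocation(World, OilMaterial,
                FVector(16.f, Size, Size), Hit.ImpactPoint, DecalRotation, /*LifeSpan=*/0.f);
        }
    }
}
```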

I also did another pass on the AI to 1.) stop it from following you while it’s reloading and 2.) keep it moving during a fight.

#2 was done by adding a list of positions where the bot received damage last

Then I made an EQS query context to basically make the bot shy away from where it was shot before.
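In C++ an EQS context along those lines looks roughly like this (mine is Blueprint; the controller class and its “RecentDamageLocations” array are placeholder names). A distance test scored against this context then pushes candidate points away from recent hits.

```cpp
#include "EnvironmentQuery/EnvQueryContext.h"
#include "EnvironmentQuery/EnvQueryTypes.h"
#include "EnvironmentQuery/Items/EnvQueryItemType_Point.h"
#include "EnvQueryContext_RecentDamage.generated.h"

UCLASS()
class UEnvQueryContext_RecentDamage : public UEnvQueryContext
{
    GENERATED_BODY()

public:
    virtual void ProvideContext(FEnvQueryInstance& QueryInstance, FEnvQueryContextData& ContextData) const override
    {
        // The querier is the pawn running the query; walk up to its controller,
        // which (in this sketch) remembers where it recently took damage.
        // ABotAIController is a stand-in for whatever controller class holds the list.
        const APawn* Pawn = Cast<APawn>(QueryInstance.Owner.Get());
        const ABotAIController* Bot = Pawn ? Cast<ABotAIController>(Pawn->GetController()) : nullptr;
        if (!Bot || Bot->RecentDamageLocations.Num() == 0)
        {
            return;
        }

        // Hand every remembered damage location to EQS as context points.
        UEnvQueryItemType_Point::SetContextHelper(ContextData, Bot->RecentDamageLocations);
    }
};
```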

Fixing #1 was just a weird quirk of AI service objects. The way I look at the player is via a service (called PatrolBot2FaceTargetService) which runs every 0.5 seconds and sets a “focus” variable on the AI controller. This essentially just turns the controller toward the object it wants to “focus” on.

The issue was that that green guy kept running even after the tree stopped going down the “findFiringPosition” leg, so the focus on the character never really got cleared. The quick fix is that when the bot reloads it drops all focus.
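Here’s the shape of both pieces in C++ (mine are Blueprint nodes; the service class, the “TargetActor” blackboard key, and where the reload hook gets called from are all placeholders):

```cpp
#include "BehaviorTree/BTService.h"
#include "BehaviorTree/BehaviorTreeComponent.h"
#include "BehaviorTree/BlackboardComponent.h"
#include "AIController.h"
#include "BTService_FaceTarget.generated.h"

UCLASS()
class UBTService_FaceTarget : public UBTService // stand-in for PatrolBot2FaceTargetService
{
    GENERATED_BODY()

protected:
    virtual void TickNode(UBehaviorTreeComponent& OwnerComp, uint8* NodeMemory, float DeltaSeconds) override
    {
        Super::TickNode(OwnerComp, NodeMemory, DeltaSeconds);

        // Runs on the service interval (0.5s in my tree) and re-points the
        // controller at whatever the blackboard currently calls the target.
        AAIController* Controller = OwnerComp.GetAIOwner();
        AActor* Target = Cast<AActor>(OwnerComp.GetBlackboardComponent()->GetValueAsObject(TEXT("TargetActor")));
        if (Controller && Target)
        {
            Controller->SetFocus(Target, EAIFocusPriority::Gameplay);
        }
    }
};

// The reload fix: wherever the reload kicks off, drop the focus the service
// set so the bot stops tracking the player while it reloads.
static void ClearFocusForReload(AAIController* Controller)
{
    if (Controller)
    {
        Controller->ClearFocus(EAIFocusPriority::Gameplay);
    }
}
```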

Here are the results of the two:

Honestly I think it’s already helping make fights a bit more dynamic.

I also added bot teams so I can start doing stuff like this:
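The team plumbing itself really is just the “simple if statement” from earlier; roughly (with TeamID standing in for wherever the value ends up stored per pawn):

```cpp
// Same team -> friendly (ignore), different team -> valid target.
// Target selection just filters candidates through this check.
bool AreHostile(int32 MyTeamID, int32 OtherTeamID)
{
    return MyTeamID != OtherTeamID;
}
```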

I want to somehow get the EQS set up so it’s a bit more random. You can see both of the top two bots moving upwards to find the next “good” position, which is scored higher from where they’re standing. I think I’ll need to do some kind of radius check? Not sure yet.

There are still other issues to sort out too…

HUD Rework and Physics noodlin’

The old HUD was starting to bother me, so I leaned into a simpler HUD that I can expand upon later.

With the original HUD I was going for a kind of skewed look, as if you were wearing a helmet and the information was displayed on these green dots on the screen. While a cool concept, I think it was too janky and had MS Paint levels of cleanliness. Also, you can see here that with a non-standard resolution the alignment would always be off relative to the screen. For the new HUD (which I would say is still a developer HUD) I went for a cleaner approach:

Nothing too crazy, just a health bar and some text in the corners. But the best part:

The positioning stays consistent regardless of dpi/resolution.

I achieved this by avoiding the Scale Boxes I was using before and just using horizontal/vertical boxes with spacers in between:

I also adjusted my DPI scaling rules:

I also found out I had a bad strand in my blueprint spaghetti from months ago:

That circled red block is what determined the direction of damage when a bullet was fired. I was doing StartVector – EndVector instead of EndVector – StartVector, which inverted the direction of damage since the “direction of damage” ended up relative to the attacked instead of the attacker.
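In code form the fix is just which way the subtraction runs before it feeds the point-damage call (hedged sketch; the project does this in Blueprint):

```cpp
#include "Kismet/GameplayStatics.h"
#include "GameFramework/DamageType.h"

void ApplyBulletDamage(const FHitResult& Hit, const FVector& TraceStart, const FVector& TraceEnd,
                       float Damage, AController* InstigatorController, AActor* DamageCauser,
                       TSubclassOf<UDamageType> DamageType)
{
    // Correct: shooter -> hit. The old bug was TraceStart - TraceEnd, which
    // points back at the attacker, so the impulse shoved targets the wrong way.
    const FVector ShotDirection = (TraceEnd - TraceStart).GetSafeNormal();

    if (AActor* HitActor = Hit.GetActor())
    {
        // The damage type carries the impulse settings; the direction passed
        // here is what that impulse gets applied along.
        UGameplayStatics::ApplyPointDamage(HitActor, Damage, ShotDirection, Hit,
                                           InstigatorController, DamageCauser, DamageType);
    }
}
```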

Why does this matter? Unreal has a damage type system which makes it easier to apply impulses from different types of damage. I have a bullet type that I want to push objects around; here’s what happens when the vectors are swapped:

When the math is correct:

(I realize the window is awkwardly small in these videos…)

I also wanted to touch up bot “death” (destruction? disassembly?) and I noticed I never made a physics asset for the Assault Rifle with no stock. After that fix plus the impulse fix, things start to look much nicer.

The legs spawning detached is still a bit annoying. That issue comes from the physics asset, which I spent so much time on a month or two ago that I eventually gave up and left it in a “this just works so don’t touch it” kind of stasis.

If you look at the tree to the left you can see there’s a “Body” bone which holds the main collision box, then there are “BackLegXX” bones which I have attached to the body bone and to the bottom legs. However, there’s nothing in between for the thighs of the bot, which is why in the video above you see the legs swinging free. I’d need to remake this guy by going through each leg, adding a box, connecting it to the body or the previous leg, and ensuring everything doesn’t collide with itself. There’s an “auto-generation” feature which I’ve had little success with, so this will end up being a big time sink.

Eventually I want bots to have a chance to detonate in an electrical fireshow, which I think would add to the dynamic nature of fights. Also I need to do a re-write of the AI to add:
– AI Teams
– Team Positioning Coordination (so the bots don’t just all choose the same spot)
– Goal Zones (for some other ideas I’m working on to add a meta level to gameplay)

Flute it up

I went into this, found a bass sound I liked, and just kinda rolled with it (it’s a bit loud, FYI).

Here’s the synths (where the concert flute is a sampler)

The flute, which is surprisingly nice, is from the Ableton Orchestral Woodwinds pack: https://www.ableton.com/en/packs/orchestral-woodwinds/. That being said, a part of me thinks I shouldn’t have used the flutes; it kinda takes a dance thing and un-dances it.

I also started working on the HUD, but I’m failing at keeping the scaling consistent:

I did put shells on the Assault Rifle though:

Not really a huge accomplishment but baby steps I guess?

Loop-dee-Loop and Todo-s

I wanted to mesh paint (https://dev.epicgames.com/documentation/en-us/unreal-engine/getting-started-with-mesh-texture-color-painting-in-unreal-engine) a bit in Unreal and thought “Oh, I’ll just flip this virtual texture box”

I have a pretty good PC (4090, 24 GB), so I’m surprised it’s taking 5+ minutes (but then again I’ve made NO effort to optimize any of my materials…)

That being said…

I guess I made things low poly enough that maybe this is inherently optimized….

But that’s a detail I can work on later. For the outside I want to:

  1. Make those back blocks look like an office building embedded into the rock
  2. Replace those generators with something that would flow better with gameplay
  3. Make a new texture for that platform. That texture is fine but I’m using it in like 12 places and I don’t want to mess with UV scaling just to get it working
  4. Make some static rock models to place along the edges of mountainous regions so it doesn’t look like a muddy ski slope
  5. Do something better with these conveyor belts. I put WAAY too much time into making those kinda work; I might just delete them
  6. I have this cabinet which is kinda sketchy looking; it was supposed to go with a terminal but I might move it again
  7. Fix the lighting so that there isn’t a weird skylight in a warehouse setting
  8. Get some friends for that tree.

Also I need to make a better road texture; at far distances it looks fine, but then you zoom in…

Every time I look at it my graphics card fan spins up…probably a bad thing.

I also added a tunnel model

The UVs and normals are REALLY messed up atm, but I don’t want to dig in yet. I fixed it by making the underlying material double-sided, but generally, if you’re using a single-sided material and your model goes from looking like this in Blender:

To this…

You should probably do it again….

But in the meantime there’s the magic “Make the material two sided” button which is normally used for meshes that have both an interior and exterior (which is not this….at all)

Still has issues. The darkness underneath comes from Unreal seeing that the normal of the face points toward the block’s origin rather than away from it, which makes the lighting calculations get all wonky.

I’ve been disorganized while working on this (also I stopped working on it for like 2 weeks and now I’m lost), so I made a Trello board here:
https://trello.com/b/dmIooAod/blacklaceworking-board

Broken into:
“Longer term goals” – Things I probably won’t look into until I have something on Steam

“MVP Make/Design” – Things I want to add to the game before I submit a demo/game to Steamworks

“Fix/Improve” – Things that are in the game but need to be reworked.

I’ll start chipping away at these, and my personal goal is to get something ready to throw on Steam by June.

More Cyber

ChatGPT is my friend.

This is in a branch:
https://github.com/wfkolb/CyberpunkRedCyberspace/tree/wk_feature/NewPlayerVisualization

It isn’t hooked up to Firebase yet (to pull the floor information), but I figure I can make a few models to handle each level and a bunch for each of the ICE daemons (you can see the dog for the Hellhound: https://cyberpunk.fandom.com/wiki/Hellhound_(RED) ). I’ll probably go through the list and make models for each. Outside of wireframe the model is…rough:

Vibin’ for some cyberpunk

Did a bit more work on the cyberpunk stuff

Functionally I think I’m close. I’m probably not going to add branching paths to the floors; I’d rather get this locked down, then I can transition the viewing to a better 3D model view (with elevator and all).

Code is posted here:

https://github.com/wfkolb/CyberpunkRedCyberspace

Again, this is mostly ChatGPT, but I got to the point where you can’t really expect ChatGPT to do 100% of the work; it is still like asking someone else for help. ChatGPT can’t read your brain and can’t know 100% of what you’re seeing and what’s on your computer, so you need to modularize as much as you can. In this case I set up the website to be essentially a few JavaScript classes that each dump a <div> that I add to the baseline HTML page. The HTML page has a connection to Firebase (https://firebase.google.com/, which has gotten MUCH bigger since I last used it in 2014; ironically I had a “senior app developer” tell me using Firebase was “unprofessional”, and now it seems to be the backend to a bunch of professional tools). The main page handles the pull/push from the server, and I have two “room display” functions that display the rooms differently based on whether you’re a GM or a player.

I have my todo list here: https://github.com/users/wfkolb/projects/2

I really need to finish the starting environment for Blacklace, but I had a two-week gap that I wanted to set up a Cyberpunk Red game for…so I’m here.

More Vibe coding

I’ve wanted to set up a way to visualize the cyberspace for Cyberpunk Red ( https://www.cbr.com/cyberpunk-red-netrunner-class-explained/ ) for a while and never found a good solution. So I tried spinning up my own version with some ChatGPT help:

Looks like crap, but if I get a simple viewer working it should be easy enough to get everything into three.js to make a better visualization for personal games. I’m hoping to get a retro terminal vibe going with this one, kinda like my other visualization here: https://willkolb.com/?p=566

Essentially all this does right now is update a Firebase database with some information that the person on the “GM” URL can edit and the person on the “player” URL cannot. So nothing groundbreaking, but I’m starting to find the weak spots of pure AI programming, which was kind of annoying. The big one is that the context saving/switching between conversations isn’t great, but I think that’s also because I’m using the ChatGPT free tier.

Also, because this is just an HTML file and I’m the only person contributing, I can just throw it on GitHub! (Unreal projects are too big for free GitHub.)

https://github.com/wfkolb/CyberpunkRedCyberspace