I updated my AI/ML post to have the github repo here:
https://github.com/wfkolb/ml_signal_detector/tree/main

Aiming to do more work on this sometime next week.
Before work I wanted to spend 30ish minutes making the alert light I talked about last post; here’s where I got:
At the end there the direction of the light is all wrong. Probably going to redo 100% of this later today (as it kinda looks bad anyways and a part of me wants the spinner in an animation).
It looks weird, I think it’s just the colors? And the proportions? And the shape?
It takes a bit to see but essentially what happened there was:
This works via a hierarchy of multicast delegates (https://dev.epicgames.com/documentation/en-us/unreal-engine/multicast-delegates-in-unreal-engine) that passes the player’s known location from bot -> assembler -> game mode; the game mode then alerts all of the assemblers, and each assembler alerts its bots.
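Blueprint multicast delegates don’t translate directly into text, but the flow is basically a two-hop broadcast. Here’s a rough Python-flavored sketch of the same relay pattern (all class and method names here are purely illustrative, not anything from the project):

class Bot:
    def __init__(self, assembler):
        self.assembler = assembler

    def saw_player(self, location):
        # bot -> assembler
        self.assembler.report_sighting(location)

    def on_alert(self, location):
        # react to the broadcast, e.g. head toward the reported location
        pass

class Assembler:
    def __init__(self, game_mode):
        self.game_mode = game_mode
        self.bots = []

    def report_sighting(self, location):
        # assembler -> game mode
        self.game_mode.report_sighting(location)

    def on_alert(self, location):
        # assembler -> each of its bots
        for bot in self.bots:
            bot.on_alert(location)

class GameMode:
    def __init__(self):
        self.assemblers = []

    def report_sighting(self, location):
        # game mode -> all assemblers
        for assembler in self.assemblers:
            assembler.on_alert(location)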
In other news I did a bunch of backend work that isn’t pretty to look at: essentially I swapped out all of the old classes and re-parented them to a generic “BlackLaceGameMode”:
This lets me make generic functions and I can pass some of the capability from the survival game mode into this new game mode (which I’m calling control).
Also performance seems to take a hit after around 20-30 bots in the world at once:
This isn’t the end of the world (I can move a bunch of stuff into C++ and memory-optimize it if I need to), but I don’t think it should be this bad with just a bunch of skeletal meshes. I bet if I started doing some occlusion culling (https://dev.epicgames.com/documentation/en-us/unreal-engine/visibility-and-occlusion-culling-in-unreal-engine) I would be able to get everything up to a very high FPS. But I’ll punt performance optimization till after I get a game working. Next up is messing around with the terminal so it actually does something.
Here’s the sequence I’m thinking the player accomplishes to “hack” the system:
1.) The player clicks “transfer group id”
2.) There is then a password prompt that the player can guess or they can click “forgot password”
3.) Then a bunch of trivia pops up that the player needs to answer
4.) Then a new password is flashed across the screen
5.) The player enters the new password
6.) The player takes over the connected assemblers by pressing a button on the screen.
After #1 I think I want the turrets I made before to pop up out of the assembler and start shooting at the player, along with a bunch of red light spinners:
Another thing I’ve been diligent about is trying to make sure I don’t do anything dumb to preclude multiplayer in this game. I think getting multiplayer shipped is another animal, but getting LAN or local games working shouldn’t be too bad.
So like nothing is straight up crashing (which is good; it also might not be too possible because I’m straight up wearing water wings by using Blueprints), but you can see the bots aren’t persisted, the weapon selection isn’t persisted, etc. I can poke around on getting that all working, but again I think it will take away from the base “game” part of it. If I get it on Steam, then the first thing I’d try to attack is multiplayer. I’ve done that before in another game; you essentially gotta mark stuff as “replicated” or “non replicated”, and if you mess it up you’re sending or receiving way too much data, or you’re giving control to someone who shouldn’t have it (such as giving a client authority over where a player is located). You can also pull in achievements/scoreboards and server stuff, but that required converting the game to full C++ and then building in libraries that Unreal cannot include normally (such as the Steamworks libs). So yeah, fun experiment but not doing that now.
I also added an “ai/ml” category for the site (it’s the lil brainy):
And this still seems like scrapers or bots:
I’ll get around to checking at some point.
I decided to bite the bullet and start learning PyTorch (https://pytorch.org/) and basic neural net creation (also, this was a reason I dropped $2000ish on a 4090 a year ago, with the promise I could use it to do things like this). After the basic letter recognition tutorial (here) I decided I wanted to make a 1-D signal detector of my own.
Here’s the plan:
From this I can surmise I would need a few python scripts for generating test data, training the model, defining the model and some kind of simulator to test out the model. Ideally with a plot running in real time for a cool video.
Quick aside: if you see “Tensor” it isn’t a “Tensor” (https://en.wikipedia.org/wiki/Tensor) in the mathematical sense; it seems more like a naming stand-in for a multi-dimensional matrix. To my knowledge there are no size or value checks for linearity when you convert a numpy array to a tensor in PyTorch, so I’m guessing there’s no guarantee of linearity? (Or maybe it’s because every element is a real constant that I pass the checks every time?)
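For example, converting a numpy array is basically just a dtype/shape wrapper around the same data; nothing else gets validated:

import numpy as np
import torch

a = np.random.randn(768).astype(np.float32)
t = torch.from_numpy(a)    # shares the numpy array's memory, no copy, no extra checks
t2 = torch.tensor(a)       # same values, but copied into new storage
print(t.shape, t.dtype)    # torch.Size([768]) torch.float32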
The signal I chose as my test signal was two Gaussian curves across 768 samples. I made this with numpy (which is much, much worse than MATLAB by the way; why isn’t there an RMS function??? https://github.com/numpy/numpy/issues/15940 ).
Essentially it’s 10ish lines of code:
import numpy as np

def ideal_signal():
    return_size = 768
    x = np.arange(return_size)
    # Gaussian parameters
    mu1, sigma1 = 196, 10  # center=196, std=10
    mu2, sigma2 = 588, 20  # center=588, std=20
    # Create two Gaussian curves
    gauss1 = np.exp(-0.5 * ((x - mu1) / sigma1) ** 2)
    gauss2 = np.exp(-0.5 * ((x - mu2) / sigma2) ** 2)
    # Combine them into one vector and normalize to a max of 1
    vector = gauss1 + gauss2
    return vector / vector.max()
Easy enough, now onto harder things.
So making the network is obviously the most tricky part here. You can easily mess up if you don’t understand the I/O of the network. Also, choosing the network complexity currently seems to be some kind of hidden magic wielded by PhDs. In my case I had the following working for me:
So this makes my life a bit easier in selection. The first layer of the neural network will be sized at 768, then a hidden layer of 128, then an output of 2. Why a hidden layer of 128? I have no idea; this is where I’m lacking knowledge-wise. My original thought was that the “features” I was looking for would fit into a 128-sample window, but as I’ve progressed I realized that is a poor way to choose your hidden layer. My guess is that I’m doing too much (math/processing wise), but this upfront assumption seems to have worked for me.
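For concreteness, a fully connected 768 -> 128 -> 2 network like that is only a few lines of PyTorch. This is my own sketch of the shape (the class name, the ReLU, and leaving the outputs as raw logits are my choices, not necessarily what’s in the repo):

import torch.nn as nn

class SignalNet(nn.Module):
    """768 input samples -> 128 hidden units -> 2 output logits (detected / not detected)."""
    def __init__(self, n_samples=768, n_hidden=128, n_classes=2):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(n_samples, n_hidden),
            nn.ReLU(),
            nn.Linear(n_hidden, n_classes),
        )

    def forward(self, x):
        return self.layers(x)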
So this is the bulk of the work here. The idea was to generate a bunch of signals that represent the span of real-world signals a detector like this would see. Therefore you have pretty much three parameters to mess with: signal delay (or number of samples to shift), signal SNR, and system noise floor. Technically you do not need the system noise floor; I have it in there as a parameter anyway because I wanted to facilitate customization of any scripts I made for later. In my case I kept the system noise floor constant at -30dB.
Essentially the pseudo code for generating signals is (a rough Python sketch follows the list):
1.) Take the ideal signal made above
2.) Apply a sample delay (positive or negative) up to a limit so we don’t lose the two Gaussian peaks
3.) Apply noise at a random SNR within a bounded limit (in my case it was -10dB to 10dB)
4.) Label the signal as a positive when the SNR I applied to the noise is above the level where I expect a detection.
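Here’s a rough sketch of those four steps. Only the -10 to 10 dB SNR range and the fixed -30 dB noise floor come from above; the shift limit, the exact power scaling, and the 0 dB label threshold are assumptions I’m making for illustration (and np.roll wraps the signal around instead of truly delaying it, which is good enough for a sketch):

import numpy as np

def make_example(ideal, max_shift=60, snr_db_range=(-10.0, 10.0),
                 noise_floor_db=-30.0, detect_threshold_db=0.0):
    # 1.) start from the ideal two-Gaussian signal
    shift = np.random.randint(-max_shift, max_shift + 1)
    shifted = np.roll(ideal, shift)                      # 2.) sample delay, bounded so the peaks survive

    snr_db = np.random.uniform(*snr_db_range)            # 3.) random SNR within the bounded range
    noise_power = 10.0 ** (noise_floor_db / 10.0)
    signal_power = noise_power * 10.0 ** (snr_db / 10.0)
    noisy = (np.sqrt(signal_power) * shifted
             + np.sqrt(noise_power) * np.random.randn(ideal.size))

    label = 1 if snr_db >= detect_threshold_db else 0    # 4.) positive when the SNR is above the detection point
    return noisy.astype(np.float32), label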
As for the size of the training data, again I had no idea what I was doing here, so I just guessed 50,000 signals to shove into the neural net. In reality this was a guess-and-check process to see if I under-trained/over-trained as I tested the model (I’m writing this after getting a working model, which is uncommon for most of these posts). The training quality is also arbitrary here; I just kept re-running numbers until things worked. Kind of lame, I admit, but I have a 4090 and training takes 15 seconds, so I have that luxury here.
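A minimal training loop sketch to go with the generation function above (the batch size, epoch count, Adam, and cross-entropy loss are my guesses; as I said, the real numbers came from trial and error):

import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

def train(model, device="cuda", n_examples=50_000, epochs=10):
    # Build the dataset from the ideal_signal() and make_example() sketches above
    ideal = ideal_signal()
    data = [make_example(ideal) for _ in range(n_examples)]
    signals = torch.from_numpy(np.stack([d[0] for d in data]))
    labels = torch.tensor([d[1] for d in data], dtype=torch.long)

    loader = DataLoader(TensorDataset(signals, labels), batch_size=256, shuffle=True)
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters())
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model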
To build out the simulator I wanted, I basically took a bunch of the generation code from the training data and threw it into a common functions wrapper for re-use. My end artifact was a simple PyQt (https://wiki.python.org/moin/PyQt) app that had buttons for starting and stopping a simulation, plus sliders for sample offset and signal SNR. Then I would make an indicator for whether the neural net detected the signal or not. The only difficulty with this is mostly just dealing with async programming (which is a difficulty with all AI). My solution here was to have the main thread run the Qt GUI, then spawn a background thread to do the AI work. The main thread generates the data, plots it, and does a thread-safe send (using pyqtSignal) of the data to the background thread. The background thread then processes the incoming data using the GPU and sends a positive or negative result back to the main app to change the line color: green for detected and red for not detected.
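A trimmed sketch of that threading pattern in PyQt5 (the worker class, signal names, and the wiring comments are illustrative; the real app also owns the plot and the sliders):

import torch
from PyQt5.QtCore import QObject, QThread, pyqtSignal, pyqtSlot

class DetectorWorker(QObject):
    result_ready = pyqtSignal(bool)              # True = detected, False = not detected

    def __init__(self, model, device="cuda"):
        super().__init__()
        self.model = model.to(device).eval()
        self.device = device

    @pyqtSlot(object)
    def process(self, signal_np):
        # Runs on the worker thread, so the GUI never waits on the GPU
        with torch.no_grad():
            x = torch.from_numpy(signal_np).float().unsqueeze(0).to(self.device)
            detected = bool(self.model(x).argmax(dim=1).item() == 1)
        self.result_ready.emit(detected)

# Wiring from the GUI thread (sketch):
#   thread = QThread(); worker = DetectorWorker(model); worker.moveToThread(thread)
#   gui.new_frame.connect(worker.process)            # queued, thread-safe hand-off of each generated signal
#   worker.result_ready.connect(gui.set_line_color)  # green for detected, red for not
#   thread.start()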
(Top slider is sample offset, bottom slider is SNR)
The plot above is locked to 20 fps and all of the calls to the neural net on the GPU return well before the frame is finished drawing. It’s pretty surprising this worked out as well as it did. However, there are definitely issues with the hard coupling to the signal being centered. I honestly still do NOT think this is anywhere near what you would want for any critical system (black-boxing a bunch of math in the middle seems like a bad idea). However, for a quick analysis tool I can see this being useful if packaged in a manner aimed at engineers without DSP knowledge.
I did many more iterations I didn’t write about here. I had issues with model sizing, training data types, implementing a dataset compatible with pytorch, weird plotting artifacts, etc.
Future work I want to do in this space:
           | My Neural Net | Full Convolution With Matched Filter
Multiplies | 98560         | 589824
Additions  | 98430         | 589056
So on paper I guess this is “technically” less work on the PC than a perfect matched filter response (i.e. correlating against the full ideal signal). However, I think the problem here is that using a neural net to do something this simple is probably much more processing intensive than just making a filter that pulls out the content you want. That being said, if you had a high pressure situation and needed to do simple signal detection and just happened to have a free NPU in your system, this could work.
The number of TOPs (see https://www.qualcomm.com/news/onq/2024/04/a-guide-to-ai-tops-and-npu-performance-metrics) that my neural net uses is ≈ 98560/10^12 = 9.856e-8 tera-operations per inference, which is INCREDIBLY SMALL for most NPUs, so most likely I could operate this in any real-time configuration (even on the cheapest possible Qualcomm NPU here: https://en.wikipedia.org/wiki/Qualcomm_Hexagon, which has around 3 TOPS; to give a better comparison, my 4090 runs 1300ish TOPS).
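A quick back-of-the-envelope check of those numbers (my own arithmetic; the x2 for additions and the 20 Hz frame rate are assumptions pulled from earlier in the post):

n_in, n_hidden, n_out = 768, 128, 2
nn_mults = n_in * n_hidden + n_hidden * n_out      # 98,560 multiplies for the two linear layers
mf_mults = n_in * n_in                             # 589,824 multiplies for a full 768-tap convolution
ops_per_sec = nn_mults * 2 * 20                    # roughly mults + adds, at 20 inferences per second
print(nn_mults, mf_mults, ops_per_sec / 1e12)      # 98560 589824 ~3.9e-06 TOPS needed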
I still want to get better at building these, so I think my next step will be more tuned towards analysis of several signals, then trying to combine them and bin them into categories (pretty much like the PyTorch tutorial but more deliberate). Also get the code on GitHub… Code is on GitHub here: https://github.com/wfkolb/ml_signal_detector/tree/main
I had WAAAY more weird glitch effects on this one but I pulled it back. The snare still sounds too loud imo.
Finished up the health side. I wanted the injection sound to sound like a spray, but it doesn’t sound great right now. I’ll mess with that later on.
Added a syrette (syringe) bar to the hud and added a quick effect for pressing “V” which I’ll make the heal key.
I think that combined with a little “psssh” sound will be enough to tell the user they healed.
Spent some time building this guy. What I’m going for is a quick box that holds and dispenses syringes. The glass on the right is supposed to be translucent (but that will be kind of hard to see until I get it into Unreal).
Putting “First Aid” on it seems kind of on the nose (no cool game design “implication symbols” or whatnot), but I’m still in rough draft mode here.
Next thing to do is a non-world-geometry version, a skeletal mesh in the player’s POV, to play an injection animation (but honestly I might skip that in favor of just making the screen go green or something).
The heart (a Unicode character: ♥♥♥) and the “First Aid” text are both just stencils I made up in GIMP.
So I was looking back at the last thing that I made for planning
and I already see the inherent problem: I never made a “Game” sequence in addition to the boss building. In that vein, what I was thinking is:
This seems simple enough but there’s a few mechanics I do not have written yet.
This was pretty easy, I just updated the EQS query to be set around a specific location rather than the location of the bot. This changes the scores so the bots wander around a fixed point and don’t leave its radius until they’re engaged.
I also had to update the Blackboard to hold this value, so I can swap it on the fly
Now to update the guard point for a non-deployed bot I just set the “IsGuarding” flag to true and the “GuardPoint” to wherever.
I think I want the health system to be a set of stims rather than recovering health or anything else. So I started modeling up a health station in blender
and syrettes/injectors/needles/syringes are kinda easy (I did this in 5ish minutes, still need a touch up).
My hope is to play an animation before giving health, essentially I want the player to show the syrette then pull below the frame and inject. This should save me animation work (hopefully).
I got the initial game mode and the spawning made, and I also made a data table which holds the difficulty settings
The idea is that each assembler will have a number of randomly generated guard bots which will patrol around the assembler and a set list of “wandering” bots that will be assembled and run towards the last place the player has been seen.
Right now I have the game mode start working; I just need to get the player alerts working. In my head I want each assembler to control its fleet of bots, so the chain will be:
A bot sees the player and alerts its assembler with a location > the assembler then alerts the game mode > the game mode gets the new position and sends it to the other assemblers.
This way I can also set up alert radii, a reporting limit, etc.