It isn’t hooked up to Firebase yet (to pull the floor information), but I figure I can make a few models to handle each level and one for each of the ICE daemons (you can see the dog for the Hellhound: https://cyberpunk.fandom.com/wiki/Hellhound_(RED) ); I’ll probably go through the list and make models for each. Outside of wireframe the model is…rough:
Functionally I think I’m close. I’m probably not going to add branching paths to the floors, but I’d rather get this locked down first; then I can transition the viewing to a better 3D model view (with elevator and all).
Again, this is mostly ChatGPT, but I got to the point where you can’t really expect ChatGPT to do 100% of the work; it’s still like asking someone else for help. ChatGPT can’t read your brain and it can’t know 100% of what you’re seeing and what’s on your computer, so you need to modularize as much as you can. In this case I set up the website as essentially a few JavaScript classes that each dump a <div> that I add to the baseline HTML page. The HTML page has a connection to Firebase (https://firebase.google.com/ , which has gotten MUCH bigger since I last used it in 2014; ironically, I once had a “senior app developer” tell me using Firebase was “unprofessional,” and now it seems to be the backend of a bunch of professional tools). The main page handles the pull/push from the server, and I have two “room display” functions that display the rooms differently based on whether you’re a GM or a player.
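For reference, here’s roughly what that pattern looks like (the class and field names below are made-up stand-ins, not my actual code):

class FloorDisplay {
  constructor(floorData) {
    this.floorData = floorData;
  }

  // Build and hand back a <div>; the main page decides where it goes.
  render() {
    const div = document.createElement('div');
    div.className = 'floor-display';
    div.textContent = `Floor ${this.floorData.level}: ${this.floorData.ice}`;
    return div;
  }
}

// The main page owns the layout; each class just delivers a finished node.
const floor = new FloorDisplay({ level: 1, ice: 'Hellhound' });
document.body.appendChild(floor.render());

The nice side effect is that ChatGPT only ever needs to see one class at a time, since each class hands the main page a finished <div>.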
I’ve wanted a way to visualize cyberspace for Cyberpunk RED ( https://www.cbr.com/cyberpunk-red-netrunner-class-explained/ ) for a while now and never found a good solution. So I tried spinning up my own version with some ChatGPT help:
Looks like crap, but if I get a simple viewer working it should be easy enough to move everything into three.js and make a better visualization for personal games. I’m hoping to get a retro terminal vibe going with this one, kind of like my other visualization here: https://willkolb.com/?p=566
Essentially all this does right now is update a Firebase database with some information that the person on the “GM” URL can edit and the person on the “player” URL cannot. So, nothing groundbreaking, but I’m starting to find the weak spots of pure AI programming, which is kind of annoying. The big one is that context saving/switching between conversations isn’t great, but I think that’s also driven by me using ChatGPT’s free tier.
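A minimal sketch of the GM/player split, assuming Firebase’s modular JavaScript SDK (the database path, config, and role check here are hypothetical, not my actual setup):

import { initializeApp } from 'firebase/app';
import { getDatabase, ref, onValue, set } from 'firebase/database';

const app = initializeApp({ /* firebase config */ });
const db = getDatabase(app);
const roomsRef = ref(db, 'sessions/demo/rooms');

// Both URLs subscribe to the same data...
function renderRooms(rooms) { /* redraw the room <div>s */ }
onValue(roomsRef, (snapshot) => renderRooms(snapshot.val()));

// ...but only the GM page ever calls the write path.
const isGM = new URLSearchParams(window.location.search).get('role') === 'gm';
function saveRooms(rooms) {
  if (isGM) set(roomsRef, rooms);
}

(A client-side check like this is only cosmetic; actually locking players out of writes is a job for Firebase security rules.)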
Also, because this is just an HTML file and I’m the only person contributing, I can just throw it into GitHub! (Unreal projects are too big for free GitHub.)
I spent the day listening to old Gorillaz videos on YouTube while working. Once I hopped into Ableton I tried re-creating “Welcome to the World of the Plastic Beach” from memory:
It turned out better than I thought after I compared it to the original. (You may notice the beeps from “Rhinestone Eyes” that I somehow got mixed up in my head and added to this song.)
The hardest part here was the vocoder, which I rarely play with and which is kind of annoying to get right. So, for my future self, here are the vocoder settings:
Here’s the Operator patch that’s driving it:
And here’s what’s being played:
Sounds pretty good. I think I could touch up the input noises more, and if I got a better brass sound I think it would work better.
Otherwise, I spent some time “vibe coding” to touch up the site. Vibe coding is just asking ChatGPT to do stuff for you. I can still see why you’d need an engineer for this, so I won’t say it’s a total replacement, but damn did it take a 5-hour task and make it a 20-minute one.
(You can only see the above if you’re viewing the website on a PC.) Both the top section and the bottom are custom shortcodes added to the site by asking ChatGPT a bunch of stuff. Here’s the PHP for the category stuff. It also requires a plugin: ChatGPT suggested one, but I found an older plugin that was open source and asked ChatGPT to adapt the initial code it gave me to that one instead. It’s pretty crazy how simple that quick reconfiguration was.
//////
// Category Icon List
//////
function shortcode_category_icons() {
    if ( ! function_exists('get_term_icon_url') ) {
        return '<p><em>Icon Categories plugin not active or function missing.</em></p>';
    }

    $output = '<ul class="category-icons">';
    $categories = get_categories([
        'hide_empty' => false, // Show all categories, including empty ones
    ]);

    foreach ( $categories as $category ) {
        $icon_url = get_term_icon_url( $category->term_id, 'category' ); // get the icon URL
        if ( $icon_url ) {
            $output .= '<li>
                <a href="' . esc_url( get_category_link( $category->term_id ) ) . '" title="' . esc_attr( $category->name ) . '">
                    <img src="' . esc_url( $icon_url ) . '" alt="' . esc_attr( $category->name ) . '" class="category-icon-img"/>
                </a>
            </li>';
        }
    }

    $output .= '</ul>';
    return $output;
}
add_shortcode( 'category_icons', 'shortcode_category_icons' );
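Once that’s registered, dropping the [category_icons] shortcode into any post, page, or text widget spits out the icon list.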
Also, under the page title:
That little line above I spent at least 3 hours trying to get working before I started embracing AI. ChatGPT understood the problem and gave me a solution in seconds. Essentially it was adding a <div> to header.php, then some JavaScript using another plugin that ChatGPT suggested, “WP Headers And Footers.” It was so easy that it probably isn’t worth blogging much about, but I can see myself getting more into the AI dev side as I go along.
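For posterity, the shape of the fix looked something like this (the element id and the content here are hypothetical stand-ins; the real ones are specific to my theme):

// header.php gets an empty placeholder right under the title:
//   <div id="under-title-line"></div>
// and this script, injected site-wide via WP Headers And Footers, fills it in.
document.addEventListener('DOMContentLoaded', () => {
  const line = document.getElementById('under-title-line');
  if (line) {
    line.textContent = document.title; // or whatever the line should display
  }
});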
Now, why use Control Rigs? Honestly, I’m not too sure yet… My hope is that IK becomes easier (so I don’t have to add IK bones), but adding these controls seems tedious, and on top of that your Forward Solve graph apparently has to be a big tower of blueprint power?
I’m probably missing the plot here, but my “forward solve” also seems to be undoing my “backwards solve,” where I assumed “backwards solve” was used for IK situations. I’m still playing around, but the hope is that by using Unreal internals I should be able to handle the physicality of the bots a bit better.
Which would be amazing if: 1.) Firefox supported USB HID and 2.) I could remap the slider. But right now it’s really good for me using OBS to record rather than the snipping tool, which records at like 20 fps.
So for example here’s the control rig I described in action, BUT there’s no OBS window visible!
Otherwise, I think I’m still dead-set on remaking the patrol bot animations in Unreal. Walking and reloading might be the most annoying, but mostly I want to be able to make quick animations without the huge headache of .fbx exporting and importing (even with the Blender Unreal add-ons, https://www.unrealengine.com/en-US/blog/download-our-new-blender-addons , which work amazingly for static meshes but are kind of sketchy for skeletal ones). I kind of wish Unreal had the option of slicing the animation list of an FBX and attempting bone resolution before importing. I really want to get this working, because then my animation workflow stays in Unreal. Blender still blows Unreal out of the water for making meshes, IMO, but animation in Blender still seems hacky with the Action Editor.
There are a few things I’m actively annoyed with when it comes to Control Rigs (which I won’t show here because this one is still a WIP). I’m also a straight-up Control Rig novice, so I bet that as I learn, these problems might be solved by better practices.
1.) You can’t manipulate the control shapes in the editor preview window. That seems like an easy addition, and it should match the same kind of workflow as the “hold Ctrl to bump the physics asset” thing.
2.) Control rigs are affected by bones. I get WHY on this one, but it seems counter-intuitive that you would ever want to make a rig that is controlled by a parent bone. I do get the idea of attaching to a bone to drive a small sub-object (for example, a turret on a large ship).
3.) When you add a control to a bone, it gets added under that bone’s children. This would be fine if #2 weren’t a thing.
4.) Default Forward Solve assignments aren’t automated. I bet I could find a Python script to do this, but still, that blueprint tower of power really could and should be generated whenever you add a new control to a bone.
I have tried to build a Source engine mod probably 5-6 times since I started programming in the late 2000s/early 2010s, and I’m finally ahead of the curve: I was able to get a mod built before Valve did something outside the public C++ repos that broke everything! (See https://github.com/ValveSoftware/source-sdk-2013/tree/master)
This is not an accomplishment but I’m happy it’s possible. Will I do anything with this? Probably not….
What happens when I launch the mod??
Uhhggg… I could figure this out, but honestly, if I’m diving into Source I’d rather start with Source 2 and CS2. But there’s a 10-year-old part of me that longs to make a Source mod, put it on ModDB, start a dev blog, abandon the mod, notice another team picked up my mod, start a mod that competes against the original, and fall into a deep depression when the original mod team gets hired by Valve.
Made a data display using three.js as a cool way to show alerts over time. The thought was that the radial view would be displayed in a corner, then would unravel into a timeline of events when the user clicks the window.
There’s a hitch when converting from the polar plot to the linear plot, which comes from me dynamically creating text objects instead of having them in memory and hidden. Further work on this will be adding an optional screen-space shader to make things look a bit more oscilloscope-y.
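The fix is implied above: build every text object up front and just toggle visibility during the transition. A rough three.js sketch of that approach (the names and data are illustrative, not from the actual project):

import * as THREE from 'three';

const scene = new THREE.Scene();
const alerts = [{ time: '04:12' }, { time: '09:30' }, { time: '17:45' }];

// Render a text label into a canvas-backed sprite once, up front.
function makeLabel(text) {
  const canvas = document.createElement('canvas');
  canvas.width = 256;
  canvas.height = 64;
  const ctx = canvas.getContext('2d');
  ctx.font = '32px monospace';
  ctx.fillStyle = '#00ff66';
  ctx.fillText(text, 8, 40);
  const sprite = new THREE.Sprite(
    new THREE.SpriteMaterial({ map: new THREE.CanvasTexture(canvas) })
  );
  sprite.visible = false; // hidden until the timeline unravels
  return sprite;
}

// Pre-allocate every label so the polar-to-linear switch only flips
// visibility flags instead of allocating textures mid-transition.
const labels = alerts.map((a) => makeLabel(a.time));
labels.forEach((label) => scene.add(label));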
My friend is working on some stuff in Rust, and it’s popped up in my job a few times, so I wrote an ASIO sine wave generator in Rust to familiarize myself.
FYI the code below is not debugged/cleaned up and definitely has stuff that can be taken out.
use cpal::{
    traits::{DeviceTrait, HostTrait, StreamTrait},
    OutputCallbackInfo,
};
fn main() {
    #[cfg(target_os = "windows")]
    {
        let host = cpal::host_from_id(cpal::HostId::Asio).expect("failed to initialise ASIO host");
        let devices = host.output_devices().expect("No devices Found");
        println!("Searching for ASIO Devices...");
        let mut counter = 0;
        for device_from_list in devices {
            counter = counter + 1;
            let name: String = device_from_list.name().expect("YOU LITERALLY JUST GAVE ME THIS, IF YOU FAILED THE PROGRAM SHOULD HAVE STOPPED WHY IS THIS ALLOWED");
            println!("{counter} : {name}");
        }
        let searchString = "Focusrite USB ASIO";
        let device: Option<cpal::Device> = host
            .output_devices()
            .expect("we already did this!")
            .find(|x| x.name().map(|y| y == searchString).unwrap_or(false));
        let device_ptr: &cpal::Device = device.as_ref().unwrap();
        let devicePrintName: String = device_ptr.name().expect("How would this not have a name at this point.");
        println!("Connected To {devicePrintName}");
        let config_ptr: &cpal::SupportedStreamConfig = &device_ptr.default_output_config().unwrap();
        println!("Default output config: {:?}", config_ptr);
        // These can be dynamic but I'm keeping them hardcoded to my Focusrite settings.
        // (This warm-up buffer is left over from an earlier attempt; nothing below reads it.)
        let mut audioOut = [0_f64; 1024];
        let mut currentTime: f64 = 0.0;
        let freqOut: f64 = 1000.0;
        let ampOut: f64 = 0.5;
        let sampleRate: f64 = 192000.0;
        let freqAsRad: f64 = 2.0 * std::f64::consts::PI * freqOut;
        for sampleVal in audioOut.iter_mut() {
            currentTime = currentTime + 1.0 / sampleRate;
            *sampleVal = ampOut * (freqAsRad * currentTime).sin();
        }
        let err_fn = |err| eprintln!("an error occurred on stream: {}", err);
        let myStreamConfig = cpal::StreamConfig {
            channels: 2,
            sample_rate: cpal::SampleRate(192000),
            buffer_size: cpal::BufferSize::Fixed(1024),
        };
        let stream = device_ptr
            .build_output_stream(&myStreamConfig, getData2, err_fn, None)
            .expect("Well I'm stumped");
        stream.play().expect("the stream refused to start");
        std::thread::sleep(std::time::Duration::from_millis(1000));
    }
    fn getData2(output: &mut [i32], _callbackInfo: &OutputCallbackInfo) {
        static mut currentTime: f64 = 0.0; // uhhgg
        unsafe {
            let freqOut: f64 = 440.0;
            let ampOut: f64 = std::i32::MAX as f64 / 32.0;
            let sampleRate: f64 = 192000.0;
            let freqAsRad: f64 = 2.0 * std::f64::consts::PI * freqOut;
            // Step time once per frame (not once per sample) so the left and
            // right channels of each 2-channel frame stay in phase.
            for frame in output.chunks_mut(2) {
                currentTime = currentTime + 1.0 / sampleRate;
                let value = (ampOut * (freqAsRad * currentTime).sin()).trunc() as i32;
                for sample in frame.iter_mut() {
                    *sample = value;
                }
            }
        }
    }
}
Anyone who knows Rust probably won’t like the way I wrote this, but it does what I set out to do. Which brings me to the stuff I don’t really like about Rust on first impressions:
1.) It expects you’re going to develop multi-threaded applications. This is totally reasonable, as that’s the primary draw of using Rust. However, if you’re trying to pump out an easy single-threaded connect-and-execute style of device control, it can be quite frustrating trying to understand how an API functions.
2.) A lot of error handling. I’m cool with this concept, but it’s probably going to end up with me putting .expect() or ? at the end of the majority of my function calls.
3.) “Optional” static typing (type inference letting you skip annotations): this has always been a pet peeve of mine.
The plus sides:
1.) Cargo is great.
2.) Forcing error management is probably a good practice.
3.) Explicitly calling out mutability is a positive in my head.
4.) Unsafe blocks are smart (because the primary goal is to make everything thread-safe).
If you want this code to work, you’ve got to pull cpal down with Cargo, enable its ASIO support, and follow the setup instructions here: https://github.com/RustAudio/cpal