Like most of us, I have lately been spending a considerable amount of time researching and talking about the implications of AI (for coding and beyond).
It’s a big subject. The technology offers us various new applications, and various new ways to misuse it too.
I think the following statement summarizes my hopes and concerns best: when working to solve a problem, we should be gaining understanding, not losing it. So use AI to get smarter / faster, not to lose understanding / decrease code maintainability.
- I have updated our AI usage guidelines with this and many other statements, covering both positive uses of AI (e.g. “review my uncommitted changes”) and warnings about negative ones (e.g. blindly trusting AI designs, which may drift in an unmaintainable direction without supervision).
- I think we can use these tools carefully (understanding AI proposals, and correcting / filtering out bad ideas) to our benefit, and I have been experimenting with doing exactly that on many projects now (including our engine).
- Being careful is certainly warranted. Committing code that “looks good on the surface” but fails to scale, fails to be maintainable, or is not understood is a road to being unable to develop the software in the future (both with AI and with your own brain).
Additional reading material:
You all have likely read a ton of AI things on the Internet lately 🙂 There is a lot of noise, over-hype (no, AI is not a replacement for human developers, and I don’t think it ever will be, though our role may change; remember there is no “production” app that has been vibe-coded as of now) and hate too. Some of the hate I agree with (AI companies committed copyright theft on an unprecedented scale; they suck), but that doesn’t mean AI is useless as a tool.
Some of the references I agree with:
- Kurzgesagt video warning about AI slop. There is a real problem of welcoming hallucinations into our knowledge base (the Internet).
- I do think that one answer is that we need search engines that try to “filter out” AI slop (not amplify it), and I’m a happy user of Kagi and agree with their POV about LLMs.
- The recent article from Addy Osmani, “The 80% Problem in Agentic Coding”, is a balanced viewpoint I agree with, highlighting both the promise and the dangers of AI agents in coding.
- The interview with the Pi devs has lots of valid observations about AI and coding. (Thanks Szymon for the reference!)
- Links to example situations, in the context of Castle Game Engine, showing how AI hallucinations produce bad code/docs: here.
- I have committed CLAUDE.md into the engine repository — it contains a number of instructions to steer Claude Code toward following our practices.
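As an aside, for readers who haven’t seen one: CLAUDE.md is a plain Markdown file that Claude Code reads at the start of a session and treats as standing project instructions. The sketch below is a hypothetical illustration of the kind of guidance such a file can carry; it is my own example, not a quote from the actual file in the engine repository:

```markdown
# Project guidelines for Claude Code (hypothetical sketch)

- Follow the repository's existing Pascal coding conventions
  (indentation, identifier naming) instead of inventing new ones.
- Never invent engine API names: when unsure whether a unit or class
  exists, search the source tree first and cite the file you found.
- Prefer small, reviewable changes, and explain the reasoning for each edit.
- Do not leave generated code in a commit unless a human has reviewed
  and understood it.
```

The point of such a file is exactly the “gain understanding, don’t lose it” rule above: it pushes the agent toward changes a human can still review and maintain.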
Finally, something you’ve been waiting for 🙂 I have tested “vibe coding” an application using Castle Game Engine. Even though I explicitly don’t think it’s a good way to make a game, it was a fun thing to try!
- Head on to https://github.com/michaliskambi/castle-vibes/ for the complete code and data, screenshots, and a long README with observations.
- Builds (Linux, Windows, web) are available.
- It is maybe the worst game I ever made/played, but it is playable, which is impressive in itself. You can walk around a winter scenery with a 3D castle, fog, falling snow, some forest, and a minimap. You can get a quest to slay 6 goblins, complete it, and at the end receive the reward: a trophy appears.
- Observe the many things that AI did correctly, but also the many it got wrong: various bugs, generating UI using Pascal code, generating models using Python, unoptimized (but still cool-looking for this demo!) snowflakes.
- It’s not a game I designed or control. It’s a random game. The features, look, and details (placement, quests) of everything are the outcome of a random process, as you can only specify so much in ~15 paragraphs of English. We can speculate how much the quality will improve “out of the box” with better AI agents in the upcoming months/years, but ultimately, if I want control over the game, I will need to put much more work into the spec/prompting, or “get my hands dirty” and just edit the code/data myself. At some point, “just edit the code/data myself” will be more efficient; after all, we made the engine to be easy to use. Pascal code is the most precise “spec” 🙂 So I don’t see the future as “just vibe code me this app”. Real applications will be the result of iterating on the solution with both AI and manual work.
- Oh, it’s such a mixture of good and bad feelings 🙂 Please read the full README with observations.
Have fun using our engine and please support the humans behind the engine 🙂.




Many people are experimenting with and using AI in the game engine, and I’m surprised that you’re only now taking it seriously)
Only now?
You didn’t follow our news then:
I did spend quite a lot of time in recent years researching how to use (and how not to use) AI.
AI is fooling fools. I have tried producing very simple routines for stuff like rotation and AI has been wrong every f’in single time. Sometimes it gives you a clue to get you in the right direction. Most of the time it makes up stuff that doesn’t exist. I had hoped AI would be more useful as a lone dev with an impossibly large goal. But it was only a waste of time. I support you as a human, you are smarter than any stochastic parrot no matter how much electricity and water and training you give it.
Many people are finding out that AI is an oversold generator of slop code. Experts realize very quickly that it is a scam. AI is useful for looking up Castle Game Engine stuff… because CGE has so much documentation. But I wouldn’t have it write even a single line of code. I made some money last year fixing somebody’s Claude-corrupted codebase.
Thank you @michalis for taking the time to compile your findings. Also, the game sounds like fun, I’ll check it out for sure.
In my own words, I’m kind of a fan of AI in some fields, but my rule of thumb is: “If I don’t know the field, I don’t ask AI”. Because if I can’t confront the AI, it can tell me whatever it wants; how could I say whether it’s true or not?
Computer-aided programming and AI are much older, with roots in the 1950s, and heuristics were the backbone of AI back then. Cognitive psychology (also born in the 1950s) compared the human to a computer (brain=CPU, memory=RAM, problem solving=algorithms, etc.). It directly influenced the development of AI, expert systems, and neural networks.
The first AI I know of (non-neural) was born in the 1970s, worked with IF-THEN rules, and like every other AI (including today’s LLMs) used heuristics. I know it only because I was learning Prolog. Other expert systems already existed, mainly based on the Lisp language.
Somewhere around the 1970s, early AI and the comparison of biological systems to machines resulted in something we use every day now: Object-Oriented Programming. Objects were described as biological cells, or sometimes as atoms, with “methods” (attached procedures) and “properties” (data). The modular, organic structures encapsulating logic and knowledge are quite similar to what symbolic AI was.
In the 1990s, UML and code generation were born. Even Delphi has ~always had wizards to create code templates. Many of us, I guess, had our own home-made software to ease the creation of components. Soon after, more sophisticated AIs started appearing, quickly bringing us to today’s generative headache.
None of this ever replaced developers, but it was very helpful and influenced the ways we can program today. So I do agree with @michalis’ statement that we’ll not get replaced; it’s just that our role may change. Well, at least for most of us.
I don’t use AI for coding, because I simply enjoy every line of code I create. It’s so nice to figure out a solution, type it, and see it working; it’s an art. I would never have learned programming if AI was doing the reasoning for me. Plus, when I create something, I understand it; AI typically does not. AI only calculates the chance that something will work, cutting off all reasoning branches that are less promising (due to heuristics), and the very next morning it doesn’t even know why it did it.
In regards to CGE, which is so good that I have fully incorporated it into my current project, some internal stuff is still unclear to me. So, to help me understand it better, I have used AI a few times to analyse CGE’s code.
But too many times I was told that “this unit contains this class”, when neither the file nor the class existed. When confronted, the AI is always like: “You’re right, it doesn’t exist. I assumed, based on naming conventions”… Since then, I prefer the good old “get dirty and dig it up” manual method in most cases.
When working with unknown code, either we are 100% sure of what it does, or we admit “I don’t know” and dig deeper. AI always assumes it knows. I hope it gets improved for documenting and analysis purposes. I second @edj’s words about how vast the docs are. CGE is big, and even the best documenting practices, plus the inline comments that Castle Game Engine has, are not always enough. Quoting Russell L. Ackoff (1973): “A system is more than the sum of its parts; it is an indivisible whole. It loses its essential properties when it is taken apart.” Analysing parts in isolation can never fully explain the behaviour of the whole.
AI can be very helpful, I don’t say no, but it probably costs more time guiding it and listening to “Oh! Your idea is so perfect” than it would take to trial-and-error my own code. Also, with Pascal code the AI doesn’t seem to understand some things properly yet, though I didn’t use Claude, so it may be better. The AI’s proposed algorithms are usually good, but as algorithms, not as copy-paste code, when it comes to more complicated stuff.
Considering games, I wouldn’t be happy to pay for a game developed by AI with AI-generated artwork. It’s a different thing to get some assistance from a machine; after all, we welcomed code completion and insight tools with relief, and artists use Photoshop or GIMP’s ‘magical’ tools. But it’s a whole different story when someone types “Hey AI, make a game for me. I’m busy with laundry”. I strongly believe that humans, real artists and developers, deserve to be recognised, and paid, for their effort, skills, and often ingenuity. If some studio relies heavily on AI, kicking out humans while telling us it’s so hard to get good programmers, then the question is about their HR skills in finding good staff, and about their moral standing.
A free game released as a novelty, a proof of concept, or research, I don’t mind. It’s actually fun. I remember using neural networks years back to cheat in some simple arcade game. I have also tried heuristic methods and CNNs to help me with coding, but I did it in order to learn those fascinating techniques and improve my own programming skills, not to replace them. And that’s how I still see it today: computers should aid us and teach us, but not replace us.