Tuesday, July 29, 2008
Modularity (week 8 status report)
My goal for this week was to split the RFTS-specific code out into reusable modules, and I think I managed to do that fairly nicely. A rule file can now specify a list of mods to be loaded. These mods contain the code that moves data from the cache into the constraint store and the processed results back into the cache. Mods can also specify additional rules, constraints and functions.
After moving functionality around with this system for a while, it seems very flexible to me. I currently have a "basic" mod that copies all the object data from the cache to the constraint store. It would however be possible to write a "light" mod that only copies some data to the store each turn, but also initializes the store with long-term constraints for the static data, like star system locations in RFTS. It might be possible to save some time on rule matching that way. One could also write a "messages" mod that adds the message board data to the constraint store and allows one to post messages, and the same for a "designs" mod. I've also added some mods for specific rulesets. This makes it possible, for example, to add a constraint about the turn type (in the three-turn cycle) in an RFTS game.
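To give a rough idea of the shape of a mod, here is a small sketch. None of the names below (inject, addConstraint, cache.objects and so on) are the real daneel-ai interface; they are placeholders just to illustrate the "copy data in, contribute extra rules and functions" pattern described above.

# Hypothetical mod sketch: every attribute and function name here is invented
# for illustration and does not match the actual daneel-ai code.

def inject(cache, rulesystem):
    # Copy object data from the client cache into the constraint store at
    # the start of a turn, so the rules can match on it.
    for oid, obj in cache.objects.items():          # assumed cache layout
        rulesystem.addConstraint("object(%d,'%s')" % (oid, obj.name))

# Extra rules, constraints and functions this mod contributes to the rule file.
rules = [r"object(oid,name) \ unexplored(oid) <=> explore(oid)"]
functions = [inject]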
I started researching the Risk ruleset being created by jphr for SoC in order to test how usable this modularization really is, and I immediately bumped into a big problem. Until now, I didn't support any client/server communication during rule evaluation. However, Risk seems to require probe orders to give the client information about which planets are adjacent, so I had to look into implementing those (note: the new wormhole patch might have lifted this requirement; I should investigate). The end result is that I allowed the Python code that can appear inside rule files to use the cache, rulesystem and connection directly, so that it is possible to place probe orders. It's a bit ugly, but it works. I assume mod and rule file writers will use these variables sensibly, as abusing them can confuse the bot pretty badly.
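Just to illustrate the idea (the order call below is a guess, not the real protocol API), a helper defined inside a rule file might use those variables roughly like this:

# Illustrative only: `connection`, `cache` and `rulesystem` are assumed to be
# injected into rule-file code by the framework, and the probe-order call is
# a guess rather than the real protocol call.
probed = set()

def probe_adjacent(planet_id):
    # Place a probe order for planet_id once, so the server can tell the
    # client which planets are adjacent to it.
    if planet_id not in probed:
        probed.add(planet_id)
        connection.insert_order(planet_id, "Probe")   # assumed call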
Today, I extended the documentation on the wiki a bit. It still needs work, but I think it might be possible for those longing for adventure to write their own AIs with the current codebase. Remarks on the documentation (or anything else, of course) are always welcome.
Oh, and Sunday I qualified for the second online round of Google Code Jam. I doubt I'll make the next round though unless I have a very lucky insight in one of the hard problems. Still, it's fun to be among the top 2500-or-so.
I'm a bit at a loss about what to do next. Since RFTS has a nasty bug that seems to prevent colonization (reminder to self: I really need to file those bugs in the tracker) and xdotx doesn't seem to be around, I don't feel much like working on that ruleset. So I think I'll take a look at the SoC rulesets (Risk, T&E and DroneSec) and get a (basic) mod for each of them. Then I'll focus on one of them to create a decent bot. It'll probably be Risk, since vi1985 is doing that too, T&E is too complex for my small brain and I don't know much about DroneSec yet (apart from the fact that it looks fun). I hope this will help the others a bit with their rulesets.
Saturday, July 19, 2008
(week 7 status report)
I've lost track of the days. I heard it was Friday yesterday, so time for another report...
So, as I said last week, I wanted to do long-term storage and perhaps some parsing this week. I finished long-term storage and updated the rfts-rules ruleset to remember the list of unvisited stars between turns. I also did the parsing and updated the rules to use a CHR-like syntax. A lot more readable and easier to write! And I made some small improvements to the framework.
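To give an idea of the new syntax (the constraint names here are made up for illustration, not the actual rfts-rules ones), a rule now reads something like

star(5) \ unvisited(5) <=> probe(5)

which keeps the star fact, removes the unvisited fact for star 5 and adds a probe fact in its place.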
Next week I'll try to separate the RFTS specifics from the framework. I'm not entirely sure yet what the end-user interface will look like, so I'll have to think a bit about that first. But when that is done, the framework is essentially finished. I can then start giving the bot some more interesting behavior and let a couple of them fight against each other.
Which reminds me: I've made a small video (using Starmapper, a fun thing by the way) of daneel-ai colonizing an undefended universe. To go on a bit about Starmapper: the main problem is that it can only download information from one player, and thus needs a guest account with full visibility to be able to view and map everything in a multiplayer game. A nice project would be to add some sort of report dump to the server, which Starmapper could then use to generate the pictures. The reports/pictures would be disabled (or inaccessible until the game has finished) for public servers. Another option would be to allow Starmapper to log in with different accounts and download and merge the information, but that doesn't sound as elegant. Something to think about, perhaps.
Monday, July 14, 2008
Tension rises (week 6 status report)
It's a bit overdue, yeah, but let's see anyway...
So, from last week's status report, the plan was to externalize the rules to show this really is a framework. Well, I did that. I then noticed my bot was slow, so I worked a bit on that, mainly because testing was no fun that way. But it still wasn't good enough, so I rewrote part of the bot to use an external constraint processing system. It's still slow for some forms of rules (I'm not entirely sure why), but so far I've managed to avoid those.
I then added types to constraints. This allowed me to parse rule arguments into Python types, so I can use ints and tuples as they are meant to be used in the guards and bodies of rules.
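To make that a bit more concrete, what I mean by parsing arguments into Python types is essentially the following (a toy sketch, not the actual daneel-ai code):

import ast

def parse_args(arg_strings):
    # Turn textual constraint arguments like "10" or "(3, 4)" into real
    # Python values (ints, tuples, ...) so guards and bodies can use them.
    return [ast.literal_eval(s) for s in arg_strings]

print(parse_args(["10", "(3, 4)"]))   # -> [10, (3, 4)]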
On the advice (or should I say "slight pressure") of Mithro, I then set out to create a bot that interacts with the universe in a somewhat meaningful way. It now manages to build fleets and colonize planets. It's still not smart and mostly relies on flooding the universe, but that is now a matter of writing good rules, which goes rather fast.
I said slight pressure because Mithro told me he didn't really like the way my project was going. He's right, of course. I didn't really focus on a deliverable AI because somewhere along the way the focus had shifted, in my mind, to a framework. I suppose I failed to discuss that properly. Anyway, I tried hard to show that my bot does work, so I hope mithro and nash will keep their confidence in me for the second half of Summer of Code.
Anyway, next week I'll implement long-term storage and rework the rules a bit to take advantage of it. I'll then finalize the parsing so it's possible to write human-readable rules, and after that I'll focus on writing a bot that can set up a fight.
I've also discovered a few bugs in various parts of Thousand Parsec, including a few in RFTS. Finding those was part of the goal of the project (to make RFTS more stable), but it's annoying nonetheless.
Wednesday, July 9, 2008
A crawling program
My bot is slow. Too slow. It takes up to 10 seconds to apply 3 rules to a modestly sized fact database, and any bot that wants to have a good strategy needs at least a few hundred rules. And yep, the time needed is roughly linear in the number of rules, so that would take too long.
However, a bit of thinking shows why it is so slow. I can't believe I didn't notice it earlier, but it's very apparent: I'm using a very slow generate-and-test algorithm to find facts matching my rules. Stupid, stupid: I've had classes every year where constraint propagation, backmarking, backjumping and other speedup techniques were explained, and when I need those techniques, I forget all about them.
So, onwards to rethinking the fact matching. I just didn't think of it as a constraint satisfaction problem, but that is indeed what it is, and that is how it should be solved.
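To make the difference concrete, here is a toy sketch (not the bot's code) of the two matching strategies for a rule with several heads: generate-and-test enumerates every full combination of facts and only checks it afterwards, while a CSP-style backtracking search checks partial combinations and prunes as soon as they become inconsistent.

from itertools import product

def consistent(partial):
    # Toy consistency check: all chosen facts must talk about the same object.
    return len(set(fact[0] for fact in partial)) <= 1

def generate_and_test(fact_lists):
    # Naive: build every full combination of facts, then test each one.
    return [combo for combo in product(*fact_lists) if consistent(combo)]

def backtrack(fact_lists, partial=()):
    # CSP-style: extend a partial combination one head at a time and prune
    # as soon as it is already inconsistent.
    if not fact_lists:
        return [partial]
    results = []
    for fact in fact_lists[0]:
        candidate = partial + (fact,)
        if consistent(candidate):
            results += backtrack(fact_lists[1:], candidate)
    return results

# Three rule heads, each with 50 candidate facts keyed by object id.
facts = [[(i, "ship") for i in range(50)],
         [(i, "moving") for i in range(50)],
         [(i, "enemy") for i in range(50)]]
assert generate_and_test(facts) == backtrack(facts)

In this toy case the naive version tests all 125,000 full combinations, while the pruned search only ever looks at a few thousand partial ones.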
I think I should also revisit my database course, as things like join and selection ordering could be important here. Perhaps I could even make the whole constraint store use a (simple) RDBMS instead of my own solution. It sounds like overkill (and it probably is), but it might be something to keep in mind...
Tuesday, July 8, 2008
Back to action (week 5-ish status report)
Last week, I didn't manage to do a lot, again. I had a weekend away with a couple of friends, and then a short week off to prepare the next year of the students' organization. I managed to put in a few hours of work, but I was so tired I didn't get much farther. Yesterday, I got back on track. I made my tests run, which is an impressive step forward, and today I started on a basic parser. The parser can currently handle rules in "head normal form". The step to readable rules theoretically only consists of simple rewriting: from

ship(1) \ moving(10) <=> attacking(10)

to

ship/1 \ moving/1 <=> _var_0_0 == '1' and _var_1_0 == '10' | attacking(10)

with the latter being the head normal form (the form you have to write rules in right now). I'm not going to bother with that right now though; it's non-critical and low risk.
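For the curious, the rewriting the parser will eventually have to do could look roughly like this. It is a quick sketch that only handles constant arguments and rules without an explicit guard, not the parser I'll actually write:

import re

def to_head_normal_form(rule):
    # Rewrite a readable rule such as
    #   ship(1) \ moving(10) <=> attacking(10)
    # into head normal form:
    #   ship/1 \ moving/1 <=> _var_0_0 == '1' and _var_1_0 == '10' | attacking(10)
    heads, body = rule.split("<=>")
    new_heads, guards = [], []
    for i, head in enumerate(h.strip() for h in heads.split("\\")):
        name, args = re.match(r"(\w+)\((.*)\)", head).groups()
        args = [a.strip() for a in args.split(",")]
        new_heads.append("%s/%d" % (name, len(args)))
        for j, arg in enumerate(args):
            guards.append("_var_%d_%d == '%s'" % (i, j, arg))
    return " \\ ".join(new_heads) + " <=> " + \
           " and ".join(guards) + " | " + body.strip()

print(to_head_normal_form(r"ship(1) \ moving(10) <=> attacking(10)"))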
For the remainder of next week, I'll try to externalize the rules and functions to separate the bot framework from its logic. This will allow multiple instances of the bot to run with different "personalities". Keeping the midterm evaluation in mind, this seems like an excellent goal, as it basically finishes the rough framework and splits the task into two subtasks. I also need to spend a few hours finding a picture of myself that the internet can see. Horrible task.
Also worth mentioning is that I'm done taking loads of time off from the project. For now, I'll be able to work 10 or so hours a day. That's about what I did in a whole week over the last few weeks. So productivity should be a bit higher from now on. Hooray!