Thursday, October 26, 2017

glTF 2.0: I Like It!

Although I'm not a 3d graphics person, I have worked with several 3d file formats. In general, I've been very disappointed in the design of these file formats. But I've finally found a 3d file format that I like. glTF 2.0 is actually pretty nice.

It's a mostly straight-forward, easy to understand file format that's pretty unambiguous. It doesn't try to implement any fancy features like U3D. It doesn't contain weird legacy baggage like X3D or COLLADA. Its design isn't so overly configurable and flexible that it's impossible to know whether what you store in it can be read by other programs, a problem that formats like COLLADA or TIFF have. It just holds a bunch of triangles and associated data structures. It seems like it was built from the ground up as a proper file format for interchange instead of growing out of some existing system with all sorts of strange behavior based on how the codebase for the original system evolved. It also has good extension points, making it easy to store additional application-specific data in a file.
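
For example, most glTF objects allow an extras property for application-specific data, and formal extensions are declared up front. Here's a minimal sketch (a TypeScript object literal standing in for the JSON; the extension name and data are made up):

    // Sketch of glTF's extension points: "extras" for ad-hoc application data,
    // and "extensionsUsed" for formally defined extensions.
    const gltfSnippet = {
      asset: { version: "2.0" },
      extensionsUsed: ["VENDOR_example_extension"],  // hypothetical extension name
      nodes: [
        {
          name: "chair",
          mesh: 0,
          extras: { myAppPartNumber: "A-113" }       // hypothetical app-specific data
        }
      ]
    };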

I think part of the reason why it came out so well is that it was originally designed for one purpose only: for sending 3d models to be displayed by WebGL. With a well-defined and basic use case, the designers had the focus to make something straight-forward and easy to work with. With glTF 2.0, the file format has been extended to support more general use cases, but the core use case--holding 3d models--hasn't been diluted by that. Storing 3d models in glTF 2.0 is still clear and concise without a lot of confusion.

I still have a few niggles with it that could be improved, though. Right now, the file format doesn't have widespread support, but adoption is starting to grow. Still, given that this is a file format specification, I feel like there should have been at least one proper reference importer/exporter for the file format before it was finalized. There are multiple implementations of the spec, which is good, but none of them are complete and comprehensive enough to allow proper bidirectional interfacing with a full 3d application, so it's hard to know whether the files I've created are correct or whether all the corners of the file format have been fully tested.

Some parts of the specification don't really give proper explanations or context for why they are needed. For example, I still don't understand why accessor.min and accessor.max exist. Like, I'm sure there's a good reason, but they just seem like an unnecessary hassle to me. Especially given that it's impossible to properly encode a 32-bit floating point number as a decimal string, I just can't see what use an inaccurate record of the min and max x,y,z values of some points is. Having more context there would be useful for implementors.

Another example is the different buffer, bufferview, and accessor objects needed to refer to memory. It took me a long time to figure out what the difference was. At first, I thought you could put the data for everything in a single bufferview and just use different accessors to refer to different chunks of it. It was only later, when I read that bufferviews were intended to map to OpenGL memory buffers, that I finally understood what each level of memory reference is for. The different buffers are meant to refer to different data stored on-disk. Usually, you'll only have one buffer, but if you have different models that share data, you can put this shared data in a separate file/buffer that those two models can share. A bufferview refers to a single in-memory chunk of data loaded into memory for a model. So, having a single bufferview for an entire scene would be wrong. You would normally have one or more bufferviews for each 3d object in a scene. In general, when accessing data from a bufferview, you would always read from the start of the bufferview. If you find yourself reading from an offset into the bufferview, then you should probably just use a separate bufferview instead. The accessors describe how to read individual data fields of a bufferview. Notably, the bufferview contains a byteStride property that allows a bufferview to be broken up into different records or entries. An accessor describes how different fields are stored/interleaved inside a record or entry of a bufferview. An accessor's byteOffset is supposed to be used for offsets into a record or entry, not for starting at an offset into a bufferview.
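
To make that concrete, here's a minimal sketch of that layering (again a TypeScript object literal standing in for the JSON, with made-up sizes): one buffer on disk, one bufferview holding interleaved position and normal data for a mesh, and two accessors picking out the two fields within each 24-byte record.

    // One on-disk buffer, one bufferview for this mesh's interleaved vertex data,
    // and two accessors describing the fields inside each 24-byte record.
    const gltfMemoryLayout = {
      buffers: [
        { uri: "model.bin", byteLength: 24000 }   // made-up size
      ],
      bufferViews: [
        {
          buffer: 0,
          byteOffset: 0,
          byteLength: 24000,
          byteStride: 24,       // each record: 3 position floats + 3 normal floats
          target: 34962         // ARRAY_BUFFER, i.e. an OpenGL vertex buffer
        }
      ],
      accessors: [
        {
          bufferView: 0,
          byteOffset: 0,        // positions start at the beginning of each record
          componentType: 5126,  // FLOAT
          count: 1000,
          type: "VEC3",
          min: [-1, -1, -1],    // the min/max bounds the spec asks for
          max: [1, 1, 1]
        },
        {
          bufferView: 0,
          byteOffset: 12,       // normals start 12 bytes into each record
          componentType: 5126,
          count: 1000,
          type: "VEC3"
        }
      ]
    };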

glTF 2.0 also offers a convenient format for storing all the 3d data in a single file, called GLB. The GLB specification is nice in that it's really basic and straight-forward, but its design is a little sloppy. The GLB file format has its total file size encoded in it, which is unnecessary and prevents the data from being streamed. Even if that were fixed, the design of the chunks inside the file also prevents the data from being written out in a single stream. All the parts of the file have to be written out separately first, their sizes determined, and then they can be assembled and written out into a GLB file. This is caused by the fact that there can only be a single buffer chunk, and the JSON chunk (which will contain references into the buffer chunk) has to be written out before the buffer chunk.
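
To illustrate, here's a rough sketch of assembling a GLB, based on my reading of the spec. Note how the total length and the padded JSON chunk length have to be computed before a single byte can be emitted:

    // Sketch of writing a GLB container, to show why the sizes of all the pieces
    // have to be known before anything can be written out.
    function buildGlb(gltfJson: object, bin: Uint8Array): Uint8Array {
      const pad4 = (n: number) => (n + 3) & ~3;
      const jsonBytes = new TextEncoder().encode(JSON.stringify(gltfJson));
      const jsonLen = pad4(jsonBytes.length);      // JSON chunk is padded with spaces
      const binLen = pad4(bin.length);             // binary chunk is padded with zeros
      const total = 12 + 8 + jsonLen + 8 + binLen; // header + two chunk headers + data

      const out = new Uint8Array(total);
      const dv = new DataView(out.buffer);
      dv.setUint32(0, 0x46546c67, true);   // magic: 'glTF'
      dv.setUint32(4, 2, true);            // container version
      dv.setUint32(8, total, true);        // total file length -- needs everything sized up front

      dv.setUint32(12, jsonLen, true);
      dv.setUint32(16, 0x4e4f534a, true);  // chunk type 'JSON'
      out.set(jsonBytes, 20);
      out.fill(0x20, 20 + jsonBytes.length, 20 + jsonLen);  // pad JSON with spaces

      const binStart = 20 + jsonLen;
      dv.setUint32(binStart, binLen, true);
      dv.setUint32(binStart + 4, 0x004e4942, true);  // chunk type 'BIN'
      out.set(bin, binStart + 8);
      return out;
    }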

Overall though, I really like the glTF 2.0 format. I really hope it gets widespread adoption. I definitely see it displacing the .OBJ format in the long term.

Thursday, July 27, 2017

WKWebView for Clueless Mac Programmers

I've been recently trying to package up my vector design web app Omber as a Macintosh app. Unfortunately, I had zero knowledge about Mac programming. Like, I had never owned a Mac. I didn't even know how to get the cursor to go to the start of a line or skip a word using the keyboard without having to look it up on Stack Overflow. I tried using Electron, but after spending a long time going through various documentation to rebrand and package things (the nw.js documentation is so much better; it's always such a joy to read compared to the Electron docs), I wasn't too satisfied with the result. It worked, but it was sort of clunky, and I think there was some weird sandbox thing going on that caused file reading to sometimes work but sometimes not. With Windows, it makes sense to use Electron because the Windows default browser engine has weird behaviour and not everyone has the latest version of Windows. But on the Mac, everyone gets free OS upgrades to the latest version and the browser engine is fairly decent, so there's no need to include a 100MB browser engine with an app. So I figured I could whip together a quick Mac application that's just a window with a web view in it in about the same amount of time it would take to debug the Electron sandbox issues.

<rant about Mac programming>
Programming for the Mac is just like using a Mac. Apple hides important details and tries to force you to do things their way. Apple keeps changing things underneath you, so all the documentation online or in books is always vaguely out of date. It's also expensive. I bought the cheapest Mac mini with 4GB RAM and a hard disk for development, thinking I could do mostly command-line stuff, but that's not the case. You really need to work from Xcode, and Xcode is a pig of a program that takes up a lot of RAM and is sort of slow. I almost immediately had to switch to using an external SSD on USB to get any reasonable responsiveness from my system.

Apple is really trying to stuff Swift down everyone's throats, but I opted to go with Objective-C because of my Smalltalk background. It's not bad, except the syntax is somewhat awful. My main issue is that part of what makes Smalltalk so productive is that it comes with an advanced IDE that's super fast and makes it easy to browse around large application frameworks to figure out how to use an API. Objective-C comes with an overwhelmingly huge application framework as well, but Xcode is slow and pokey and doesn't come with good facilities for diving through the framework. Code completion is not good enough. There should be a quick way to find how other people call the same method, check out the documentation for a method, and check out the inheritance tree. Xcode is more of a traditional IDE with some code completion. It would be nice if Xcode actually labelled all of its inscrutable icons too. No one knows what any of those buttons mean, but using those buttons isn't optional either.

The latest MacOS/OSX versions do include a web view, but I always get the feeling that Safari developers don't really understand web apps and want to discourage people from making them. I find that they only implement just enough features in Safari to support their own uses and then lose all interest in implementing things in a general way that can have multiple uses. For example, for the longest time, they refused to implement the download attribute on links because Apple didn't need it, so why should anyone else need it? Then, when they did implement it, it initially didn't work on data-urls and blobs because they didn't understand how important that was for web apps. Similarly, the new WKWebView initially could only show files from the Internet and not load up anything locally, making it useless for JavaScript downloadable apps. Then, even when they did fix it, things like web workers or XMLHttpRequest are still broken, really limiting its usefulness.
</rant about Mac programming>

Anyway, I found a great blog post that shows how to make a one window app with a web view in it. It lists every step, so it's easy to follow along even with no understanding of Mac programming. It worked for me, but Xcode has changed its default app layout to use storyboards, so some of the instructions don't work any more, and it used the old WebView, which is very limited. The new WKWebView is better because it allows for JIT compilation of the JavaScript, and it comes with proper facilities for letting the JavaScript send data to native code (the old web view required a bad hack to do that; I've put a small sketch of the JavaScript side of this after the instructions). So here are some updated instructions:
  1. Get Xcode and start it up
  2. Create a new Xcode project
  3. Make a MacOS Cocoa Application
  4. Fill in the appropriate application info, choose Objective-C for the language
  5. That should bring you to the screen where you can adjust the project settings
    1. If you want to run in a sandbox, I think you have to turn on signing. I think Xcode will take care of getting the appropriate certificates for you (I had already gotten them earlier).
    2. At the bottom of the General settings, under "Linked Frameworks and Libraries", you should add the WebKit.framework
    3. In the Capabilities tab, you can turn on the App Sandbox if you want (I think this is needed for the Mac App Store). Be careful, there seems to be a UI bug there. Once you turn on the app sandbox, you can't turn it off from the UI any more.
    4. If you do enable the App Sandbox, you also need to enable "Outgoing Connections (Client)" in the Network category. This is required even if you don't use the network. WKWebView seemed to have problems loading local files if the network entitlement wasn't enabled.
  6. Go to your ViewController.h, and change it to
  7. #import <Cocoa/Cocoa.h>
    #import <WebKit/WebKit.h>
    
    @interface ViewController : NSViewController
    
    @property(strong,nonatomic) WKWebView* webView;
    
    @end
    
  8. When using storyboards, the app delegate doesn't have direct access to the view, so you have to control the view from the view controller instead.
  9. Then go to your ViewController.m. Usually, you would draw a web view in the view of the storyboard and then hook it up to the view controller. Although this is possible with the WKWebView, all the documentation I've seen suggests manually creating the WKWebView instead. I think this might be necessary to pass in all the configuration you want for the WKWebView. To manually create the WKWebView, add these methods that show a basic web page:
  10. - (void)loadView {
        [super loadView];
        _webView = [[WKWebView alloc] initWithFrame: 
            [[self view] bounds] ];
        [[self view] addSubview:_webView];
        // Instead of adding the web view as a subview as
        // in above, you can also just replace the whole 
        // view with the web view using
        //     [self setView: _webView];
    }
    
    - (void)awakeFromNib {
        [_webView loadRequest:
            [NSURLRequest requestWithURL:
            [NSURL URLWithString:@"https://www.example.com"]]];
    }
    
  11. Now when you run the program, you should see the web page from example.com there.
  12. The next step is to create a directory with all your local web pages that you want to show. Create a folder named html in the Finder (i.e. just a normal folder somewhere outside of Xcode)
  13. Drag that folder onto your project in the file list. Enable "Destination: Copy items if needed" and "Added folders: Create folder references"
  14. You should now have a html folder in your project. You can delete the original html folder that you created earlier in the Finder since you no longer need it. (You can confirm that the html folder will be included in your project properly by looking at your project file under the Build Phases tab; the html folder should be listed in the Copy Bundle Resources section)
  15. Create an index.html file in your new html folder. Put some stuff in it.
  16. To show that page, go to your ViewController.m and change the awakeFromNib method to this:
  17. - (void)awakeFromNib {
        NSString *resourcePath = 
            [[NSBundle mainBundle] resourcePath];
        NSString *htmlPath = [resourcePath 
            stringByAppendingString:@"/html/index.html"];
        NSString *htmlDirPath = [resourcePath 
            stringByAppendingString:@"/html/"];
        [_webView
            loadFileURL:[NSURL fileURLWithPath:htmlPath]
            allowingReadAccessToURL:
                [NSURL fileURLWithPath:htmlDirPath isDirectory:YES]];
    }
    
    
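As promised above, here's a quick look at the JavaScript side of WKWebView's messaging facility. This is a small sketch: it assumes the native side has registered a script message handler named "bridge" (a name I made up) on the WKWebView's WKUserContentController via addScriptMessageHandler:name:, and the message then arrives in the native userContentController:didReceiveScriptMessage: callback.

    // Sketch of posting data from the page's JavaScript to native code via WKWebView.
    // "bridge" is a hypothetical handler name; it has to match whatever name was
    // registered on the WKUserContentController in the native code.
    function sendToNative(payload: object): void {
      const handlers = (window as any).webkit?.messageHandlers;
      if (handlers?.bridge) {
        handlers.bridge.postMessage(payload);
      }
    }

    sendToNative({ action: "saveFile", name: "drawing.svg" });  // hypothetical message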

Saturday, June 03, 2017

Nw.js vs. Electron

Today, I tried porting an html5 web app of mine into a desktop application. When it comes to running JavaScript programs on the desktop, there are two main choices: node-webkit (nw.js) and Electron. I wasn't sure which one to choose. I didn't think that my web app was very complicated, so I decided to use nw.js. It's simpler, older, has an easier programming model, and I've been happy when I've used apps based on nw.js in the past.

Using nw.js was great. It was so simple and easy to use. I just unzipped nw.js somewhere, dropped my own web pages there, and off it went. It was nothing like the days and days of agony involved in making a Cordova app. The amount of documentation was very manageable, so I was soon diving through it to figure out various polishing issues. And it was all pretty simple. Fixing the taskbar icon was one line. Making it remember the window size from when it was last closed--also one line. Putting in a "save as" dialog was a little more work, but, again, nothing to sweat about.
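
For reference, here's roughly what that set-up looks like: nw.js just reads a package.json manifest sitting next to your pages. It's shown below as a TypeScript object literal for consistency with the other sketches in this post, but the real thing is just a JSON file, and the window fields are the ones I remember using, so double-check them against the nw.js manifest docs.

    // Rough sketch of an nw.js package.json manifest (as a TS object literal).
    // "main" points straight at an html page; the window block is where the
    // one-liners for the taskbar icon and remembered window geometry live.
    const nwManifest = {
      name: "my-web-app",        // hypothetical app name
      main: "index.html",
      window: {
        title: "My Web App",
        icon: "icon.png",        // taskbar/window icon
        id: "main-window",       // nw.js remembers size/position per window id
        width: 1024,
        height: 768
      }
    };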

Then, I decided that I wanted the save dialog to default to saving to Windows' My Documents folder. And that was hours and hours of agony. The nw.js API is pretty small, so I went through all the documentation with a fine-tooth comb, looking for how to do it, and I couldn't find anything. I then thought that maybe that API was in node.js, so I went through all the node.js documentation to find out how to do it--nothing. Then I thought there might be an NPM package to do it. After much searching, I turned up nothing. I think most people use node.js for server stuff, so they never need to store stuff in a user's Documents folder.

After hours of this, I took a peek at Electron, and it was right there. Electron has an extensive platform integration API for not only getting at the documents folder, but also for crazy MacOS dock stuff, etc. Electron is used by bigger companies that ship more complicated applications, so they care deeply about all the subtle platform integration issues needed for a polished app. As a result, Electron has a much deeper and much more extensive platform integration API than nw.js. Of course, the Electron programming model is more complicated than nw.js, so it seems like it will require a lot more code to be written to get things going. And there's a lot more documentation, so I don't think it's possible to read it all, like I could with nw.js. And I'm concerned there might be annoying configuration issues. But it looks like I'll have to move to using Electron.
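
For contrast, here's the sort of thing Electron's platform integration API gives you. This is a sketch of main-process code, assuming a recent Electron where dialog.showSaveDialog returns a promise; the default file name is made up.

    // Sketch: default an Electron save dialog to the user's Documents folder.
    import { app, dialog } from 'electron';
    import * as path from 'path';

    async function askWhereToSave(): Promise<string | undefined> {
      const documentsDir = app.getPath('documents');  // the API nw.js was missing
      const result = await dialog.showSaveDialog({
        defaultPath: path.join(documentsDir, 'untitled.svg'),  // hypothetical filename
      });
      return result.canceled ? undefined : result.filePath;
    }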

So if you need extensive platform integration APIs, use Electron, despite the fact that it's more complicated. If you're making something more self-contained, like, say, a game, then nw.js is probably fine though, and you'll save time because it's so easy to set-up.

Update (2017-6-7): Apparently, there's another difference in philosophy between nw.js and Electron too. nw.js tries to create a programming environment that imitates a normal web environment as much as possible. Platform integration is implemented as minor embellishments on existing web APIs with reasonable defaults chosen. With Electron, using normal web APIs will work, but not well. Lots of platform integration features are available, but the programmer has to explicitly write separate Electron code to take advantage of those features, and the API isn't that nice (due to Electron's multi-process architecture and lots and lots of optional parameters). For example, to open a file dialog in nw.js, you can simply reuse the existing html5 file dialog APIs, and the return results are augmented with some extra path information that you can use to open files. To open a file dialog in Electron, you can't reuse your existing html5 file dialog code because Electron's implementation is missing a couple of features, so instead you have to make use of Electron's file dialog APIs. Electron's file dialog APIs are fine, but a little messy to set-up, and by default they aren't modal, so you have to jump through some hoops to get normal file dialog behavior.
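
For reference, here's a sketch of the nw.js side of that. The nwsaveas attribute and the full local path showing up in the input's value are nw.js-specific behaviours, and I'm writing this from memory, so treat the details as approximate.

    // Sketch: a save dialog in nw.js is just a regular html file input.
    // nw.js fills in the real local path instead of the fake path browsers return.
    const input = document.createElement('input');
    input.type = 'file';
    input.setAttribute('nwsaveas', 'drawing.svg');  // asks for a save-as dialog with a suggested name
    input.addEventListener('change', () => {
      const savePath = input.value;  // full path under nw.js
      console.log('Saving to', savePath);
    });
    input.click();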

Update (2017-8-28): Despite what some people say, the nw.js documentation is much better than the Electron documentation. Electron has a lot of documentation, but it's not well-written. For example, I found the Electron docs would often just list a bunch of method names and method parameter names without really saying what the parameters do (this is similar to the node.js documentation, actually). The documentation that Intel and others have provided for nw.js is very clear and almost a pleasure to read. To show you how good the nw.js documentation is, when I was making a Mac App Store version of an Electron app, I consulted the nw.js documentation on how to do it because the nw.js documentation was just so much more clear and detailed.

Update (2019-9-11): This is only indirectly related to this topic, but I gave a talk earlier in the year on the topic of Java on JavaScript VMs. As part of the talk, I gave a survey of different ways of running JavaScript on the client such as Cordova, nw.js, Electron, WKWebView, UWP, etc. Just to warn you in advance, the talk was intended for a Java users group, so the tone of the talk is light-heartedly derogatory about JavaScript. But it's intended in good humor--don't take it too seriously.

Friday, May 12, 2017

Building a Basic City Builder

This post is me rambling about trying to understand how city builder games work by making a very, very simple city simulation model.

I love city builder games, but I always get very frustrated playing them. I think the problem is that most game designers and programmers of city builder games don't actually obsess about cities and don't spend hours reading about and thinking about cities. Now, Will Wright, the designer of the first SimCity, did spend a lot of time reading about the philosophy of cities, and the original SimCity was a great simulation for a game that had to be able to run on a 4.77MHz computer. It was built around themes of how residents needed a balance of residential, commercial, and industrial zones and of how land values and transportation access were important. Later city builder games do include much more complex city models, but I find they lack any over-arching theme or philosophy about the nature of cities. How is it that New Urbanism, one of the biggest movements in urban planning over the last few decades, doesn't have any of its tenets reflected in any of the most recent city builders? The designers of the latest city builder games simply focus too much on the game aspects of city builders and not enough on the urban planning. They seem to design game mechanics and simulation parameters based on what seems "fun" instead of reflecting upon a philosophy or theme of what constitutes a city.

Part of the joy of cities is that they are full of stories. Every neighbourhood has a story about how it evolved and grew and all the little things that people do there. Where do people shop? How do they get to work? What do they do for fun? Every city has a different story. But most city builder games have their simulation parameters set in such a way that they can only tell one story. For example, SimCity 4, which I consider to still be the pinnacle of the SimCity series, has its simulation set up in such a way that you almost inevitably end up with a city that looks like northern California. The simulated residents are highly biased in favour of driving, and you have little ability to influence that. The city simulation is resistant to the levels of densification typical of non-American cities. The simulation doesn't allow farms in built-up areas, but I encountered plenty of urban farms when I lived in Switzerland. Even a basic assumption of the game like the fact that you need to supply water and sewage infrastructure to have even a basic level of housing development isn't actually true. Dubai was able to build many towering skyscrapers that weren't hooked up to a sewage system. All of these sorts of assumptions and fixed parameters in the city simulation constrain what sort of cities can be produced in the game and restrict the types of stories that players can tell. Even worse, SimCity 5 completely abandoned all pretense of accurately simulating a city and embraced purely game-based mechanics for modelling cities.

There's a lot of buzz about Cities: Skylines, which was made by a Finnish developer that previously made some awful transportation simulation games. Their transportation simulator never worked well for trains and buses, and it still doesn't, but they did get it to work well enough for cars that they were able to make a financially successful city simulator. Similar to how the developers built many transportation games that focused on modelling minute details of bus scheduling and bus stops while completely missing the big picture understanding of how mass transit actually works, Cities: Skylines has a detailed city model underneath that provides a simulacrum of a city when viewed at scale, but has no meaningful philosophy in its design and doesn't make much sense when you poke into it. One of the major aspects of the simulation models the player's ability to move dead bodies through the city! I'm currently living in Toronto, and I can't help but think that Jane Jacobs would cry if she knew how many YouTube videos there were of people building multi-block, multi-story so-called "optimal" intersections in Cities: Skylines. The sheer prevalence of these videos is a sign that the underlying simulation model and theme of the game is broken. Note to armchair city builders: if you're building a continuous flow intersection in your city, you've already failed.

Of course, it's easy to complain about things. It turns out that I don't really know how to make a better system. Although I think I figured out how to make reasonable transportation models many years ago, I've never figured out how the underlying economic models of city simulators should work. In fact, I'm not entirely sure how the economic models of existing city simulators are designed. As such, it's hard to know what their underlying assumptions are, how they might be wrong, and how they might be fixed. The economic models of games are obviously biased in favour of growth. If a player lays out tracts and tracts of residential zones in the middle of nowhere, people will suddenly build houses in those zones for no apparent reason. Admittedly, in many places in the world, this is a reasonable assumption. In the areas near large, booming, metropolitan centres, if the government were to spend millions to build out sewage, power, and highway infrastructure to an area and then zone it for a subdivision, developers would quickly build tracts and tracts of suburban housing there. And for gameplay purposes, it's important for the city simulation to be biased towards growth because players love the feel of an expanding city where bigger and better things are constantly being built (though playing a dying city where the infrastructure must be slowly rolled back as people move out and where its role has to be reinvented might make an interesting scenario). But is this biasing towards growth done in a heavy-handed way that restricts the ways that a city can evolve or in a subtle way that still lets players design a city the way that they want?

To get a better insight into the way these economic models might work, I dabbled a bit in reading academic papers on urban planning models, but I never could figure them out. I tried out a trick I figured out in high school and tried to find the oldest paper I could find on the subject, and I actually found one that was somewhat comprehensible: Kenneth Train's "A Validation Test of a Disaggregate Mode Choice Model." My takeaway from the paper is that real-world urban planning models are based on polling a population and building statistical/fitting models of how that population weighs the choices they make about where to live or how to get around. For people building a computer game simulation, a micro-economic agent simulation should capture this. Basically, you have a statistical distribution saying that for every 100 people, 30 of those people prefer a house with a yard, 20 people choose their home based on the quality of the schools, 35 need to live within 10 minutes of work, and 15 like having a lot of cultural amenities. Then during the game, you randomly generate people based on the statistical distribution, throw them into the city, and have them make individual choices based on their preferences. Then, you just have to choose an appropriate statistical model of people to get the biases you want for your game. In hindsight, this is pretty obvious. If you model a bunch of individual, different people, then in aggregate, you will get an accurate city model. This still left a big problem though. This agent simulation will accurately model the residents in a city, with all of its assumptions explicitly encoded, but this approach doesn't really work for modelling a city's growth. Why do people move to a city? How do you bootstrap an initial population for a city that has no buildings, no residents, and no infrastructure? If a game just regularly generates random people based on a statistical distribution and throws them into the city, then the whole simulation is inherently biased towards growth again. It seems like too blunt an approach to the problem. Surely, there must be a more nuanced way of modelling growth that has a better philosophy behind it than a theme of unlimited growth? Is there a way of modelling growth that provides more adjustment knobs that can be used to encode different assumptions about growth?
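
Here's a sketch of that agent-sampling idea in code. The preference categories and the 30/20/35/15 split are just the made-up numbers from above, and the scoring is deliberately simplistic.

    // Sketch: generate residents from a statistical distribution of preferences,
    // then let each simulated person score housing choices with their own weights.
    interface Home { yardSize: number; schoolQuality: number; commuteMinutes: number; cultureNearby: number; }
    type Weights = { [K in keyof Home]: number };

    // Made-up distribution from the text: 30/20/35/15 out of every 100 people.
    const personTypes: { share: number; weights: Weights }[] = [
      { share: 0.30, weights: { yardSize: 1.0, schoolQuality: 0.1, commuteMinutes: -0.1, cultureNearby: 0.1 } },
      { share: 0.20, weights: { yardSize: 0.1, schoolQuality: 1.0, commuteMinutes: -0.1, cultureNearby: 0.1 } },
      { share: 0.35, weights: { yardSize: 0.1, schoolQuality: 0.1, commuteMinutes: -1.0, cultureNearby: 0.1 } },
      { share: 0.15, weights: { yardSize: 0.1, schoolQuality: 0.1, commuteMinutes: -0.1, cultureNearby: 1.0 } },
    ];

    // Randomly generate a person by sampling from the distribution.
    function randomPerson(): Weights {
      let r = Math.random();
      for (const t of personTypes) {
        if ((r -= t.share) <= 0) return t.weights;
      }
      return personTypes[personTypes.length - 1].weights;
    }

    // Each person picks whichever home scores best under their personal weights
    // (assumes homes is non-empty).
    function chooseHome(person: Weights, homes: Home[]): Home {
      const score = (h: Home) =>
        person.yardSize * h.yardSize + person.schoolQuality * h.schoolQuality +
        person.commuteMinutes * h.commuteMinutes + person.cultureNearby * h.cultureNearby;
      return homes.reduce((best, h) => (score(h) > score(best) ? h : best));
    }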

I thought about this growth problem for a few years, but I could never make any headway with it. Regularly generating new random people to move to a city in order to get some growth might work for competitive games of SimCity. If there are different cities with different amenities, then depending on how your city compares to others, newly generated people might choose to move to other cities instead of yours. This sort of model might work well for multiplayer competitive SimCity. But I couldn't figure out how a growth model would work for a single-player city building game. I decided that the only way to find a reasonable approach to this problem would be to actually build a small game where I could dig into the details of the problem. Hopefully, after being enmeshed in the details, I would be able to see something that I couldn't see from far away. I came up with a design for a small city simulator that would focus on the economic model (since I felt I already understood how to design the transportation model), and then it was just a matter of finding the time to build it.

Finally, last week on Thursday, I received a last minute e-mail saying a spot opened up at the TOJam game jam that was running during the weekend, so I decided that it was time to dive in. I had worked out a design for a simplified city builder earlier. The city builder would present the side view of a single street. Since the focus was on the economic model and streetscaping and not on transportation issues, there was no need for a full 2d city. Having a side view also meant that the game could have a simplified interface as well that might even work ok on cellphones. In the game, players would place individual buildings and not zones. I think most city builder players like to customize the looks of their cities, but placing individual buildings doesn't work well at a large scale. But on the small scale of a side-view game, I was hoping that placing individual buildings would be feasible. During the first day, I was able to finish coding up a basic UI that would let players plop buildings on the ground and query them. There was a floating artist at TOJam, Rob Lopatto, who drew some amazing pixel art for me of a house and two types of stores.



On the second day, I coded up a basic traffic model. Since I was just trying to make something as simple as possible in a limited time, I only modelled people walking between buildings at a fixed speed. Similar to SimCity 1-4, I modelled the aggregate effect of people walking around instead of actually modelling the specific, individual movements of those people on the road. I think the lesson of SimCity 5 and Cities: Skylines is that modelling the movement of individual cars can be slow and leads to strange anomalies, especially when there is extreme traffic. In real life, during extreme traffic, people shift their schedules to travel during non-peak times or they change routes or they move. It is rare for a traffic situation to become so dire that people end up in multi-day traffic jams and never reach their destinations. The problem with modelling the aggregate effect of traffic is that the simulation simply outputs some traffic numbers for chunks of road. There's nothing to see, and players like seeing little people and cars moving around. So I had to code up a separate traffic visualization layer that would show people moving around in proportion to the amount of traffic there was. I wasn't sure if I would end up showing the right amount of traffic if I generated people doing whole trips (my queuing theory is really bad), so instead I used the SimCity 4 trick of randomly generating people to walk around for short sections of road and then have them disappear again. I could then just periodically generate new people on sections of road that weren't showing enough traffic over time in their visualization. Surprisingly, even though my simulation was small enough that I could simulate the whole world 60 times a second, I still ended up using my geometric series approach in both the traffic visualization and parts of the simulation. It worked really well!
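
Here's roughly what that traffic visualization trick looks like in code. This is a reconstruction rather than the actual jam code, and the decay constant and walking speed are arbitrary.

    // Sketch: the simulation produces an aggregate traffic level per road section;
    // the visualization spawns short-lived walkers so that what the player sees
    // roughly tracks those numbers over time.
    interface RoadSection {
      simulatedTraffic: number;  // from the aggregate traffic model
      shownTraffic: number;      // decaying count of walkers recently spawned here
    }

    interface Walker { section: number; progress: number; }  // progress 0..1 along a short stretch

    const walkers: Walker[] = [];

    function updateTrafficVisualization(sections: RoadSection[]): void {
      sections.forEach((s, idx) => {
        s.shownTraffic *= 0.98;                        // arbitrary decay each frame
        if (s.shownTraffic < s.simulatedTraffic) {
          walkers.push({ section: idx, progress: 0 }); // spawn someone to walk a short bit of road
          s.shownTraffic += 1;
        }
      });
      // Move walkers along and remove them at the end of their short section,
      // like the SimCity 4 trick of people appearing, walking a bit, and vanishing.
      for (let i = walkers.length - 1; i >= 0; i--) {
        walkers[i].progress += 0.01;                   // fixed walking speed
        if (walkers[i].progress >= 1) walkers.splice(i, 1);
      }
    }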

By the end of the second day though, I had hit a wall. I still couldn't figure out how to model city growth. I could simulate people in the city, but I couldn't figure out how to get new people to move in. I didn't want to explicitly encode a rule for having people automatically move into the city. Perhaps I could come up with some sort of hacky rule for when new residents would be induced into moving into the city. The new rule would likely still have an emergent behaviour of causing an implicit bias towards growth in the city, but if the rule still made thematic sense, then it would be more satisfying and could be tweaked and improved later on. I started leaning towards the idea of using jobs to induce people to move to the city. If there were companies looking to hire in the city, then people would move there. That mostly makes sense, and it avoids explicitly biasing the city simulation towards growth.

I still had a bootstrapping problem though. The companies in a city won't hire people unless they have customers and are making money. But if there's no one living in a city, then companies will have no customers and hence have no jobs. I could make companies hire people even when they have no customers, or I could maybe implement a hack where companies might tentatively hire people to see if they can make money and then fire them if it doesn't work out. I think games like SimCity and Cities: Skylines have a hack where cities with small populations have an explicit macroeconomic boost to industrial jobs. If you zone some industrial areas, some companies will move in and create some factories to employ people even if they have no customers and no one lives in the city. This seemed like just another artificial bias towards growth, even if it was in a different form, so I wanted something different.

Instead, I went with a different cheat in that I created a type of building that was self-sufficient and could supply an initial boost of employment without depending on the existence of other people or infrastructure. I opted for subsistence farming plots. They could provide a minimal income even in a city with no other people or infrastructure, thereby attracting a population base. 100-150 years ago, the Americas were settled by offering free plots of farming land to immigrants, so it's not entirely unheard of, though I'm not sure how realistic that assumption would be now. Once the simulated city developed a sufficient population, there would be enough collective demand to make stores or small workshops profitable, so they would employ people, resulting in a positive feedback loop of growth. This ends up supporting a theme that a city is dependent on investments of infrastructure to support certain types of economic activity and growth. Or maybe it says that people power a city, but a city requires infrastructure to improve efficiency and productivity to unlock that power. In any case, I think those are reasonable philosophies around which a city simulation can be designed. I'd be a little bit afraid of making the rules too deterministic so that it feels more like a game than a story-generating city simulator (e.g. you need electricity to have a factory over size 3, you need an outside road link to let your industrial population grow to more than 3000, or stuff like that). And there's another danger of inadvertently building an arbitrary civilization simulator instead (e.g. you need iron mines, coal mines, and an iron smelter to build the steam engine, which is then a prerequisite to industrial age buildings, etc). But it does show that this philosophical approach is broad enough to capture many different city models.

On the third and final day, I polished up the UI and tweaked the city model a bit. Since there were only three different building types, and given the shortness of time, the resulting city model was still very simple, but it seemed to work well enough, and it helped me work out a different way to simulate growth in a city builder game. Here's an overview of the rules in the final simulation code (a rough sketch of how a couple of them might look follows the list):
  1. People without a job will go to work at the building with the greatest demand for workers (demand must be at least one full worker). Farms always need one worker while stores need workers proportional to the number of visitors/customers they have
  2. People without a home will move to any home they can find
  3. People will move to a home that's closer to their work than their current home
  4. People will visit the closest store to their home to buy things, but if the number of visitors exceeds the store's capacity, the people will move to the next closest store (and so on)
  5. People without homes or jobs will leave the city
  6. People will cause 1 unit of traffic for each square of the street they need to walk on to get from their home to their work
  7. Any building that still has a demand for workers that can't be filled from the local population will hire someone from outside the city (provided that person can find housing)
  8. Every 100 turns, rent will be collected from each building's residents and workers. Upkeep costs for each building will also be deducted.
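
And here's a rough sketch of how a couple of those rules might look in code. Again, this is a reconstruction and not the actual jam code; the building fields and the worker-per-visitor ratio are made up, and I'm reading rule 5 as "missing either a home or a job."

    // Rough sketch of rules 1, 2, and 5 from the list above (not the actual jam code).
    interface Building {
      kind: 'farm' | 'store' | 'house';
      position: number;     // square along the street
      workers: number;
      residents: number;
      capacity: number;     // homes: beds; stores: visitor capacity
      visitors: number;
    }

    interface Person { home?: Building; job?: Building; }

    // Rule 1: farms always want one worker; stores want workers proportional to visitors.
    function workerDemand(b: Building): number {
      if (b.kind === 'farm') return 1 - b.workers;
      if (b.kind === 'store') return Math.floor(b.visitors / 10) - b.workers;  // arbitrary ratio
      return 0;
    }

    function simulateTurn(people: Person[], buildings: Building[]): void {
      if (buildings.length === 0) return;
      for (const p of people) {
        // Rule 1: the unemployed go to the building with the greatest demand for workers.
        if (!p.job) {
          const best = buildings.reduce((a, b) => (workerDemand(b) > workerDemand(a) ? b : a));
          if (workerDemand(best) >= 1) { p.job = best; best.workers++; }
        }
        // Rule 2: the homeless move into any home with space.
        if (!p.home) {
          const home = buildings.find(b => b.kind === 'house' && b.residents < b.capacity);
          if (home) { p.home = home; home.residents++; }
        }
      }
      // Rule 5: anyone still without a home or a job leaves the city.
      for (let i = people.length - 1; i >= 0; i--) {
        const p = people[i];
        if (!p.home || !p.job) {
          if (p.home) p.home.residents--;
          if (p.job) p.job.workers--;
          people.splice(i, 1);
        }
      }
    }
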
Here's the final game.

Tuesday, January 24, 2017

Trying Out Some Emscripten on Chrome

Omber is the GWT JavaScript app that I'm currently working on. It runs in a browser, and I've also created an Android version using Cordova. It has some computationally-intensive routines, so it's sometimes a little sluggish on cellphones, which is understandable given how under-powered cellphones are. I've been looking at whether there are ways to improve its performance.

The cellphone version of Omber runs on Chrome (specifically, the Crosswalk version of Chrome). It's unclear how to get optimal performance out of Chrome's V8 JavaScript engine. The Chrome developers talk a lot about how great its Turbofan optimizer is, but they never actually give any advice on how to write your code to get the best code generation from Turbofan. My code does a lot of floating point math, and I really need the numbers to be packed tightly to get the best performance out of the system. Should I be manually using Float64Arrays to do this? Or is V8's Turbofan smart enough to put them directly into objects? Are there ways I can add type hints to arrays and other methods? Can I reduce the number of array bounds checks? In a language like C++, I could simply write my code in a way that would produce the code generation that I wanted, but how do I guide Chrome into generating the code that I want?
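
To illustrate what I mean by packing numbers tightly, here's a sketch of two ways of storing the same 2d points; with the typed array, the coordinates are guaranteed to sit contiguously in memory as unboxed doubles.

    // Sketch: storing 2d points as an array of objects vs. packed into a Float64Array.
    interface Point { x: number; y: number; }

    function sumOfObjects(points: Point[]): number {
      let sum = 0;
      for (const p of points) sum += p.x * p.x + p.y * p.y;
      return sum;
    }

    function sumOfPacked(coords: Float64Array): number {
      // coords holds x0, y0, x1, y1, ... interleaved.
      let sum = 0;
      for (let i = 0; i < coords.length; i += 2) {
        sum += coords[i] * coords[i] + coords[i + 1] * coords[i + 1];
      }
      return sum;
    }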

Mozilla has their Emscripten project that can compile C++ to asm.js-style JavaScript code. Firefox then has a special optimizer for translating JavaScript written in the asm.js style into highly optimized machine code. Personally, I think asm.js isn't a great idea. The asm.js subset is very limiting and sort of hackish. As far as I can tell, the code it produces is not very portable either. Basic things like memory alignment and endianness are ignored or simply handled poorly. For these reasons, most of the other browsers don't support asm.js-specific code optimizations, but they claim that their optimizers are so good that their general optimization routines will still get good performance out of asm.js code.

So is it worth using Emscripten or not then? To try things out, I made a small test where I took my polygon simplification code and rewrote it in C++, compiled it using Emscripten to JavaScript, and compared the performance to my original GWT code. I was too lazy to record the actual numbers I was getting during my benchmarking runs, but here are the approximate numbers:

Original code on Chrome: ~280ms
Emscripten code on Chrome: ~230ms
Emscripten code with -O2 on Chrome: ~300ms
Original code on Firefox: ~4000ms
Emscripten code on Firefox: ~160ms
C++ code: ~150ms

Takeaways:


  • The Firefox code optimizer isn't very good, so having a special optimizer for asm.js is really useful for Firefox. Firefox was able to get performance that was pretty close to that of raw C++ code when dealing with asm.js code though.
  • The Chrome optimizer is so good that the performance of the normal JavaScript code is almost as good as the Emscripten code. In fact, it probably wasn't worthwhile rewriting everything in C++ because I could have probably gotten similar performance by trying to optimize my Java(Script) code more.
  • Since the Chrome optimizer isn't specifically tuned for Emscripten code, the Emscripten code might actually result in worse performance than JavaScript depending on whether Turbofan is triggered properly or not. For example, compiling the Emscripten code with more optimizations (i.e. -O2) actually resulted in worse performance on Chrome.
I was a little worried that Chrome's V8 engine might be tuned differently on cellphones, meaning that I might not get similar performance numbers when running on a cellphone. So I also ran the benchmarks on Cordova:


Original code on Chrome: ~2600ms
Emscripten code on Chrome: ~1600ms
Emscripten code with -O2 on Chrome: ~2800ms

Here, we can see that the Turbofan optimizer is still triggered even on cellphones, and the resulting code performs much better than the original JavaScript code. The Turbofan optimizer still isn't reliable though, so you might actually get worse performance depending on the Emscripten code output.

I'll probably stick with the Emscripten version for now, but I'll later try to optimize my original JavaScript and see if I can get similar performance out of it. It would be nice if I could just link my C++ code directly with JavaScript, but Cordova doesn't allow this. In Cordova, all non-JavaScript code must be triggered asynchronously through messages, which isn't a good fit for my application. It might be possible to do something with Crosswalk, but it seems messy and I'm too lazy. 

Alternatively, I could try using Firefox on cellphones since its optimizer can get performance that's near that of C++, but the embedding story is a little unclear. The Mozilla people abandoned support for embedding their Gecko browser engine, and they ceded that market entirely to Chrome/Blink. They've now realized that it was a mistake and they're trying to get back in the game with their Positron project etc., but I think they've entirely missed the point. They're building an embedding API that's compatible with Chrome's CEF, but Chrome's CEF already works fine, so why would anyone want to use Mozilla's version? The space to play in is the mobile market. Instead of wasting time on FirefoxOS, they should have spent more time working on an embedded Firefox for mobile apps. An embedded Firefox for iOS with a JavaScript precompiler would be really useful, and Mozilla could dominate that space. Well, whatever.