Monday, December 19, 2016

Going NOWHERE, Status Update December 2016

Development went quietly this year, so quiet in fact that it might have appeared to some that the project had died:

"I founded this 2 years ago and every once in a while i check the forum or I see a 'discount' in humble bundle. Just admit it: this project is dead. :-("

I wrote a longer reply to this forum post, which we thought might be of general interest to many of you who are probably asking yourselves the same thing:
I'm sorry you're seeing so little output from us as of late, but I can assure you that the project is far from dead. We have received additional funding that has allowed us to fully concentrate on our work and extend its scope (enough for at least another 12 months of development), instead of doing more PR.

Do not give up on us! I understand that it is frustrating to not see much from the outside, but I've spent and am still spending a lot of time nailing down the technical side, and that part is not pretty, if not outright depressing at times. No one wants a new alpha version more than me, but the necessary work is not done yet. You can absolutely call our development "troubled"; it's not going as elegantly as other projects, it's almost embarrassing, but we're 100% committed to it, and we'll get to great results in the end. This is not just a throwaway game, it's also an infrastructure that I want to see blossom and bear fruit once it has progressed sufficiently.

We have a new kind of world/rendering engine that can render thousands of CSG operations on implicit primitives in realtime, allowing for crazy geometric animations and procedural world editing. The rendering engine runs on a new kind of Scheme-like programming language that permits live compilation of generated programs (massively speeding up procedural generation of geometry, textures and music), and on top of that sits a (not yet rewritten) visual programming IDE that allows us to design AI behavior and procedural generators for geometry and music live, that is, while the game is running.
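To make the "CSG operations on implicit primitives" part more concrete, here is a small, hypothetical sketch in Python. None of these names are real engine code (the engine evaluates such trees on the GPU through the new language); the point is only that each primitive is a signed distance function and CSG reduces to min/max combinations of those functions.

    # Toy illustration of CSG on implicit primitives via signed distance
    # functions (SDFs): negative = inside, positive = outside, zero = surface.
    import math

    def sphere(cx, cy, cz, r):
        return lambda x, y, z: math.sqrt((x - cx)**2 + (y - cy)**2 + (z - cz)**2) - r

    def box(bx, by, bz):
        # axis-aligned box centered at the origin with half-extents (bx, by, bz)
        def f(x, y, z):
            qx, qy, qz = abs(x) - bx, abs(y) - by, abs(z) - bz
            outside = math.sqrt(max(qx, 0)**2 + max(qy, 0)**2 + max(qz, 0)**2)
            return outside + min(max(qx, qy, qz), 0.0)
        return f

    # CSG operators: union = min, intersection = max, subtraction = max(a, -b)
    def union(a, b):     return lambda x, y, z: min(a(x, y, z), b(x, y, z))
    def intersect(a, b): return lambda x, y, z: max(a(x, y, z), b(x, y, z))
    def subtract(a, b):  return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

    # a box with a spherical bite taken out of one corner
    shape = subtract(box(1, 1, 1), sphere(1, 1, 1, 0.8))
    print(shape(0, 0, 0))  # negative: inside the remaining solid
    print(shape(1, 1, 1))  # positive: this corner was carved away

Animating the primitives' parameters each frame and re-evaluating the tree is what makes crazy geometric animations cheap in this representation.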

With more manpower and better funding, perhaps all this would be done sooner. But I'd also lose control of my ideas, and then we'd just look like everything else that's out right now, and I'd still have to try to make the game of my dreams.

As always, I'm occasionally updating my Devlog with what's going on. I hope I can soon work my way out of development hell, and get back to visible, and most importantly, playable results.

I probably can't make your worries go away, but you deserve to know at least what's going on.
That said, rather than pushing the release date back by another year, we're going to delay the release of NOWHERE indefinitely -- that is, the release date will be changed to TBA (when it's done). That's not very informative, and also not what we originally promised, but at least it's more accurate.

New alpha releases will arrive much sooner. In fact that is my primary goal. I'm aiming for a new release in the first week of March 2017. We originally promised alpha versions every two weeks, and I was able to keep that commitment for a time, but as goals grew and development became more convoluted, I couldn't think of a way to publish my development progress in playable form, aaand - two years went by. :-/

Thus we had to bite the bullet and compromise on what to put in the alphas while the main game loop matures. The March 2017 alpha will come in the form of a basic "creative mode" using the new engine, with a built-in modding IDE to toy around with, and a written introduction that explains how to play in creative mode and how to use the IDE.

From that point on we will be able to do regular (ideally biweekly) updates to creative mode until the main mode has enough mechanics to be playable from start to end; then both modes will be playable side by side.

Tuesday, March 15, 2016

NOWHERE Progress Report March 2016

Leonard wrote on the NOWHERE Steam Greenlight page about our current progress:

Hi, Leonard here. I'm sorry for the delays towards releasing the next alpha. I've been spending the past year in R&D hell developing technology that helps us make the game as dynamic as we envision it to be, and move it sufficiently far away from anything that people can currently buy. That means even some of the basics are still missing, but our commitment to development is unbroken. I'm working nearly every waking minute on the game. If you are interested in the technical details, have a look at my frequently updated Devlog, or my Twitter at @paniq. We also have an IRC channel where you can talk to us directly: irc.freenode.org, #duangle.


New Shadertoy: Light Propagation Volume https://www.shadertoy.com/view/XdtSRn

Wednesday, September 16, 2015

NOWHERE Development Log, Financials and The Future™


Screenshot of recent prototype work for Nowhere. Article on that coming soon.
Three months ago I started a development log for NOWHERE and its associated tooling in a github gist. It wasn't really supposed to be read by anyone but other developers, and was meant more as a self-motivation exercise, but the convenience of updating gists meant this blog went without updates for five months while the devlog saw a new entry nearly every week.

If you are interested in the, shall we say, microscopic aspects of development, check it out.

I should also mention that we launched a separate project for the programming language developed alongside our game Nowhere, aptly titled None. The release got some attention on Hacker News and Reddit. None will allow us and modders to visually design and compile procedural generators for art and sound to machine language and shaders on players' machines, while the game is running. Liminal is the complementary project, a game engine built on None. While Liminal is not ready for use, None has matured enough for experimentation. If you would like to support the development of these two pieces of technology, please consider becoming a patron on Patreon. Patreon money directly pays for our rent, food and coffee.

In the entry for today, I also made some remarks about our current financial situation and the future of the project.

Now on to financial matters. It appears we are running out of money again. The long wait for new content has eroded the number of new founders, and one of our long-time patrons had to reduce his monthly support because he's in a financial pickle himself right now. We also used to borrow a small supplementary amount of money from my mum each month and are about to reach the cap next month, so that funding pot will run out by the end of October as well. Sylvia is still waiting on payment from one of her commissions (this was before Sylvia began to require advances), but it is very likely they screwed us and we're not going to see a cent. There are currently no new commissions in sight for her, but of course that can change any day.

That means we have to think about how to get fresh funds before September runs out. I've considered starting a Patreon exclusively for None/game engine development, so we at least get support for that part of the work (which is not what Nowhere players are necessarily interested in, although they damn well should be ;-)), but Patreon only allows one page per account, and managing that one account is already hard work for Sylvia, so we'll have to merge the tool development into the existing page and make it about more than one thing. We'll have to think about how best to present this.

We also need something new to show so we can justify making a new video and inviting everyone to check out the new alpha. Sylvia has done lots of new concept art that could go into a new video, we can finally replace that old dreadful logo, and swap "Psychedelic RPG" for a much more fitting "Alien Life Simulator". The fundraising video is nearly two years old. Back then we gave away the endgame without mentioning many of the intermediate steps, and I believe people would be more inclined to trust us if there were a more detailed plan of action, and if we were clearer that our game is being developed as a service, not as a one-trick pony.

It has also become clear that we will not be able to deliver on our full promise by the end of 2015, but I want to do a release anyway with whatever we get working by that date. We don't want anyone to feel disappointed or exploited. The plan is to ship what's there, then continue on a "sequel" (which is really just a new major version) and keep our founders on board until they get what they paid for.

I can not believe how fresh our concept for Nowhere still is. The pitch has been out for several years and no one has stolen it from us, and I guess that's because it really is that ambitious and forward thinking. We have ideas for art, sound and gameplay design as well as programming, modding and player service which are quite different from how most game development is done today, and I would love to see all our goals realized in exactly the way we envision them.

I don't mind if it takes ten years to get there, but we need to compromise on the progress on the way and reintroduce regular releases of alphas, betas and "masters" to keep the company operational without incurring additional debt.

I wanted to hold off new alphas until we had a game model we could definitely stick to, but I see now that I have to get over my perfectionism and, as the mantra goes, release early, release often.
Oh man, look at this logo. So good.

Thursday, April 23, 2015

Towards Realtime Deformable Worlds: Why Tetrahedra Rule, Voxels Drool

In this article I provide a chronicle of our turbulent terrain development, what lessons I've learned and what's next. Also, why tetrahedra (probably) rule and voxels (mostly) drool.

It was nearly one and a half years ago that we showcased an early tech alpha of NOWHERE that had organically growing and sculptable terrain.

I had put high hopes in this approach, but found that, while triangle meshes allowed fine control over geometric features, they were difficult to simulate efficiently in a way that would keep the surface 2-manifold.

The last alpha version that we released took a few minutes of CPU time to grow a single coral-like structure. The program takes several snapshots of the same mesh at different stages and exposes them as separately rotating triangle meshes.

Knowing that I wanted to generate worlds far larger than that, running out of ideas for how initial geometry could be seeded, and hitting upon other shortcomings of triangle meshes, I embarked on a long and perilous journey to explore the rabbit hole of alternate geometric representations.

But first, let us visit the ultimate (and retconned) list of requirements that I made for NOWHERE's terrain:

  • 360° of freedom, modelling of minor planets in zero G
  • Ideally a single integrated solution for animated actors as well as terrain
  • Can be seeded from any implicit function (not necessarily just distance fields)
  • Support for smooth surfaces but also sharp features (e.g. a cylinder would have both)
  • Can model rocky terrain with many cavities
  • Able to model anisotropic structures like sticks and membranes
  • Can model contiguous, "never-ending" surfaces, but also separated, distinct structures like floating rocks
  • Can host cellular automata for non-scripted procedural growth of plant-like organisms and crystals
  • Can selectively host rigid and soft body physics simulations, and run classic bone animation
  • Can host a simulator for flowing substances like water, lava, plasma or goo
  • Persistently diggable, fillable, sculptable by players
  • Support for different sediments: as you dig in, you hit upon different layers of material
  • Supports classical mesh operations like extrusion and subdivision, as well as boolean operations (better known as CSG modelling)
  • Supports seamless ornamental and player-authorable texturing, similar to Wang Tiles; Classical triplanar mapping is not enough.
  • Support for realtime ambient occlusion / radiosity
  • Scale independent, non-uniform distribution of detail
  • Support for seamless distance-dependent level of detail

Over the course of this project, which began several years ago (back then, what the gameplay would ultimately be was completely unclear), I tried more than a handful of solutions. The first one was the most obvious choice:
You can clearly see what influenced it. But I wanted more freedom of expression. The next approach glued prefabricated models together, similar to how Besiege works:
While that covers scaffolding nicely, it's neither particularly organic nor terrain oriented, in fact it scaled rather badly, and would only allow for rather puny scenes.

Then I tried mixing scaffolding and raymarching distance fields to produce a huge, if somewhat monotonous, terrain that was unfortunately completely unalterable and ate too much GPU time to make it a comfortable solution for low-end computers (we want to make people with weaker hardware happy too):
I had some luck with Wang Tiles earlier, so I thought a 3D version of that might be interesting to try:
This ended up being more of a one trick pony. At the time I entertained the thought that terrains could not be altered, but I could not make peace with it. On the way to figuring out how to make geometry user-editable, I experimented with procedural terrain growth by extrusions, the first triangle mesh based solution:
The pesky problem with extruding triangle meshes is that it is very hard to tell when surfaces intersect, which turns the mesh into garbage; furthermore, joining intersecting meshes is anything but trivial. I started to look into scientific papers for solutions and was made aware of Stéphane Ginier's formidable SculptGL, which does ZBrush/Sculptris-like sculpting with support for punching holes into the topology, a rarely supported technique. I wanted the same thing for our game, and that's where my first full-on foray into triangle mesh editing started (see first video).
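As an aside, here is a deliberately naive sketch of why intersection detection hurts: even the broad phase of finding candidate triangle pairs after an extrusion step is O(n²) without an acceleration structure, and an exact triangle-triangle test (e.g. Möller's) still has to follow. The function names below are made up for illustration.

    # Brute-force broad phase for flagging *potential* self-intersections in a
    # triangle mesh. Non-adjacent triangles with overlapping bounding boxes are
    # only candidates; an exact narrow-phase test would still be required.

    def aabb(tri):
        xs, ys, zs = zip(*tri)
        return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

    def boxes_overlap(a, b):
        (alo, ahi), (blo, bhi) = a, b
        return all(alo[i] <= bhi[i] and blo[i] <= ahi[i] for i in range(3))

    def candidate_pairs(vertices, faces):
        """faces: list of (i, j, k) vertex index triples."""
        boxes = [aabb([vertices[i] for i in face]) for face in faces]
        for a in range(len(faces)):
            for b in range(a + 1, len(faces)):
                if set(faces[a]) & set(faces[b]):
                    continue          # shared vertices: adjacency, not a collision
                if boxes_overlap(boxes[a], boxes[b]):
                    yield a, b        # hand off to an exact triangle-triangle test

Doing this robustly and fast enough per edit, and then stitching the intersecting surfaces back into a valid 2-manifold, is exactly the part that kept blowing up.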

So in the past year I looked into voxel-based solutions again, particularly Dual Contouring, and I got some rather spectacular results out of it in terms of feeding it with distance field functions, doing CSG on it, and running light simulation, which is why I thought this would be the ultimate solution we'd end up with. The first octree-based solution ran on the old Python engine, and despite large parts being written in C, seeding even small volumes took a few seconds too long, and editing wasn't realtime enough. Here are some screenshots from that time. Sigh.
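For readers unfamiliar with the technique, here is a heavily simplified, hypothetical sketch of the per-cell step that makes dual contouring so convenient to feed with distance fields. Proper dual contouring minimizes a quadratic error function built from the crossing points and normals; the sketch below just averages the edge crossings (surface-nets style), which is enough to show the idea.

    # Simplified per-cell vertex placement for a dual-contouring-style mesher,
    # driven by an arbitrary signed distance function `sdf`.

    CORNERS = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
    EDGES = [(i, j) for i in range(8) for j in range(i + 1, 8)
             if sum(abs(CORNERS[i][k] - CORNERS[j][k]) for k in range(3)) == 1]

    def cell_vertex(sdf, origin, size):
        """Vertex for the cell at `origin` with side `size`, or None if empty."""
        corner_pos = [tuple(origin[k] + CORNERS[c][k] * size for k in range(3))
                      for c in range(8)]
        d = [sdf(*p) for p in corner_pos]
        crossings = []
        for i, j in EDGES:
            if (d[i] < 0) != (d[j] < 0):            # the surface crosses this edge
                t = d[i] / (d[i] - d[j])            # linear zero crossing
                pi, pj = corner_pos[i], corner_pos[j]
                crossings.append(tuple(pi[k] + t * (pj[k] - pi[k]) for k in range(3)))
        if not crossings:
            return None                             # entirely inside or outside
        n = len(crossings)
        return tuple(sum(c[k] for c in crossings) / n for k in range(3))

Connecting the vertices of the four cells around every sign-changing edge then yields the quads of the surface, and CSG just means combining the distance functions with min/max before sampling.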
After a rewrite of our engine and experimenting with new editing paradigms in January this year, I had an insight for how to do, well, something with tetrahedral meshes, but it didn't quite click yet.

Instead, I got sidetracked into tetrahedra-based dual contouring and wrote a fast GPU-based implementation that used a grid instead of an octree, and experimented with alternate honeycombs for meshing. Things were great for a while. Descend your gaze upon this smooth animation of an implicit function:
I managed to integrate a realtime light propagation solution that ran on the same grid:
I made strides. I wrote a realtime meshing solution that meshes voxel data on the fly, completely alterable (although that's not visible in the video yet), realtime lit, at the cost of a heavily reduced draw range:

Alas, geometry representation on regular grids sucks for the same reason as shape representation in bitmaps, the 2D analog, and I've ultimately decided to give up on a voxel-based solution entirely.

Why? Behold, the long shitlist of voxel-based data structures. If you're considering writing something with voxels, beware of the perils:

  • You have no semantic structure: you are unable to easily discern area, orientation, islands sharing attributes, proximity, or related features over areas larger than the immediate 1-cell neighborhood, which always spans the minimal possible distance; octrees can help cache some of this information, but they're not nearly as accurate as a graph-based representation.
  • Storage grows with the cube of the side length (the square in 2D). Every time your side length doubles, storage increases by a factor of 8. That means 1 GB of VRAM stores only a 1024^3 voxel cube at 1 byte per voxel; typically, when dual contouring and different materials are involved, a voxel costs at least 16 bytes, so a ~406^3 voxel cube is more realistic (the small sketch after this list works through these numbers).
  • Rendering must be done in triangles, which requires a "mixdown" of the data, as the two structures do not store data in the same way. You keep the same information twice in memory. Also, you can not know the number of generated triangles in advance, neither can you update single triangles well (storage penalty), which makes compact allocations more difficult.
  • While detail is not taxed (in fact, it's prepaid ;-), it is capped at the Nyquist frequency of the cell side length; you cannot have features smaller than that.
  • On the other hand, large areas without topological features still store values at the full sample rate, which, for reasons explained, is a tremendous waste of space. 
  • Scaling / shearing / rotating grid data is lossy. Transformed voxels don't always fit back into individual cells due to the Nyquist cap. Even a dual contouring grid can't guarantee that edges and vertices are always preserved. That means you're forced to keep moving objects separate from the world.
  • Sparse octrees can help with culling redundant space, but their expanse is isotropic; the volume of a pencil still requires breaking the resolution down to the maximum hierarchy depth, despite there being less surface detail along the length of the pencil.
  • Hierarchical access in an octree is limited to 10 levels with 32-bit addressing. That means if you want to address a space with a resolution higher than 1024^3, you will need 64-bit Morton codes, which incurs a storage and compute penalty.
  • All octrees are bounded. Exceeding the bounds means at best patching and at worst rebuilding the tree. Likewise, all grids are bounded. Resizing the grid can't be done with a simple memcpy. Also, all operations that could exceed the resolution must be checked. 
  • Neighborhood queries in octrees are a mess. Going up the hierarchy is manageable, going down requires extra data; explicit edges are not present. In short: due to hierarchical storage, octree cells are not really neighbors. 
  • Because of this, local transformations in an octree cause deep changes all over the hierarchy. 
  • Raycasting requires visiting each cell with a Bresenham-like algorithm; Octrees can help here, but can't help in the pencil case, when the ray is grazing the surface. 
  • Buckets can alleviate some Octree issues, but not ultimately fix them, and incur an added storage penalty. 
  • Mesh LOD techniques are inapplicable; LOD only works for dealing with situations where voxels become smaller than a pixel. Dual Contouring seems like it could apply, but it has no good LOD scheme, and the Nyquist limit is also forced to jump by a whole factor of 2, regardless of topological detail.
  • There is no popular voxel data format or voxel data editor; Most work that artists do these days is stored in meshes and textures, and it's impossible to import them without damaging the model's representation, let alone export them for backwards compatibility. The semantic information is destroyed. 
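To make the storage and addressing bullets above tangible, here is the tiny arithmetic sketch referenced in the list, with the numbers from the bullets simply spelled out:

    # Dense voxel grids grow with the cube of the side length, and 32-bit
    # Morton codes run out after 10 octree levels.

    GIB = 1 << 30  # one gibibyte

    def max_cube_side(bytes_per_voxel, budget=GIB):
        """Approximate side length N of the largest N^3 grid fitting the budget."""
        return round((budget // bytes_per_voxel) ** (1.0 / 3.0))

    print(max_cube_side(1))   # ~1024: 1 byte per voxel fills 1 GiB at 1024^3
    print(max_cube_side(16))  # ~406: hermite data + materials at 16 bytes per voxel

    # Morton (Z-order) addressing interleaves x, y and z bits, i.e. 3 bits per
    # octree level, so a 32-bit code covers at most 32 // 3 = 10 levels = 1024^3.
    print(32 // 3)            # 10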
So I went to look into Tetrahedral meshes, also called "Finite Element Meshes", the solution I'm currently implementing. The idea is pretty straightforward: they work exactly like triangle meshes, except that the triangles are expanded by one dimension to form a volumetric mesh of tetrahedra, which tessellates space completely into solids and air. Tetrahedral meshes are largely undiscovered by the game development community (with two notable exceptions), but old hat for companies doing structural analysis simulations (think hardcore Bridge Builder) and interactive tissue surgery simulators.

Here's why I think that NOWHERE will fare much better using a tetrahedral mesh for its terrain, in comparison with the bullet list above:

  • A mesh is a graph and therefore nothing but semantic structure. Area, orientation, neighboring islands sharing attributes, and proximity are easily discernible from the provided vertex-edge-face-cell structure. The immediate 1-cell neighborhood spans large volumes of space where topological detail is low, and small volumes where topological detail is high.
  • Storage increase is independent of the space spanned and depends only on topological complexity, so it's a little difficult to tell how much complexity this buys you. Assuming no vertices are shared, and only float positions are stored, 1 GB of VRAM covers about 29 million triangles, or 22 million tets. If those tets were stored at equal distance, they would compact to a ~281^3 voxel cube, but they could span a distance bounded only by the desired float precision.
  • Rendering must be done in triangles, which is easily achieved either by pruning the mesh for surface bounding faces only, or maintaining boundary faces in a separate array during mesh operations.
  • Topological detail is taxed, but independent of scale; the size of features is bounded only by floating point precision.
  • A large area without any variance in data can occupy as few tets as possible, providing an effective compression for regions with low entropy.
  • Scaling / shearing / rotating vertices is, apart from float precision issues, lossless. C1-continuous deformations do not alter topology. Deformations that do alter topology cause at least one tet to invert, which can be detected and fixed with local remeshing; this only alters face and edge relationships, not vertex positions.
  • All classical mesh animation techniques, like bone animation, still work here. Additionally, 3D cellular automata also operate on tetmeshes. Physics simulations of softbodies or fracturing volumes on tetmeshes are well documented.
  • Meshes can use highly anisotropic scales. The pencil example would perform quite well here, tessellating space with the lowest amount of elements required. Along the length of the pencil, no extra elements need to be added.
  • Access in a tetmesh is not explicitly hierarchical, but graph based. Each tet is a node with four neighbors (using a half-face structure), and most spatial queries are done by walking across tet faces (a toy version of such a walk is sketched after this list). The tetmesh acts as both volume data and acceleration structure. For everything else, non-exclusive ad-hoc BVHs can be constructed.
  • A mesh is only bounded by its cells. If you need more space, add as many cells as required. This can be done locally and in a directed fashion. It is also possible to maintain the mesh within the volume of a cube with near infinite side length.
  • Local transformations in a tetmesh are indeed only local to the hull of their connected neighborhood.
  • Raycasting is a simple neighborhood walk algorithm. In the pencil case, when the ray grazes the surface along its length, a few steps suffice to cover an enormous distance.
  • Mesh LOD techniques apply. Techniques exist to adaptively decimate meshes by storing edge collapse operations in a hierarchy (a so-called Half-Edge Tree).
  • Triangle mesh editing is the de-facto standard for models in games. Many formats and assets exist. Any 2-manifold triangle mesh can be tessellated into a tetmesh, and pruned back into a triangle mesh without any loss of information, so triangle mesh imports and exports can be easily supported.
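Here is the toy walk referenced in the list above, a hedged Python sketch rather than the real half-face implementation: each tet simply stores the index of its neighbor across each face, the sign of a tet's volume doubles as the inversion test mentioned earlier, and locating a point is just hopping from neighbor to neighbor.

    # Toy point location in a tet mesh by walking across faces. A production
    # version would use a half-face structure and randomize the face order to
    # avoid cycles on degenerate input; this sketch keeps only the core idea.

    def signed_volume(a, b, c, d):
        """6x the signed volume of tet (a, b, c, d); its sign flips when the tet inverts."""
        ab = [b[i] - a[i] for i in range(3)]
        ac = [c[i] - a[i] for i in range(3)]
        ad = [d[i] - a[i] for i in range(3)]
        cross = [ab[1] * ac[2] - ab[2] * ac[1],
                 ab[2] * ac[0] - ab[0] * ac[2],
                 ab[0] * ac[1] - ab[1] * ac[0]]
        return sum(cross[i] * ad[i] for i in range(3))

    def locate(point, verts, tets, neighbors, start=0):
        """Walk from tet `start` to the tet containing `point`.
        tets[t]      = (i, j, k, l) vertex indices
        neighbors[t] = tet index across the face opposite each vertex, or None."""
        t = start
        while True:
            vs = [verts[i] for i in tets[t]]
            for face in range(4):
                # face `face` is the one opposite vertex `face`
                a, b, c = [vs[i] for i in range(4) if i != face]
                apex = vs[face]
                if signed_volume(a, b, c, apex) * signed_volume(a, b, c, point) < 0:
                    t = neighbors[t][face]      # point is beyond this face: step over
                    if t is None:
                        return None             # walked off the mesh boundary
                    break
            else:
                return t                        # inside all four faces: found it

Raycasting works the same way, except each step picks the face the ray exits through rather than the face the point lies beyond.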
This is the very last solution I'm trying, and the one we'll most likely stick with, which means that the gameplay of the next alpha will be close to Alpha 75 again, and then some.

The only regret I have is that I didn't know all this three years ago. It took a long time to look at approaches and figure out what the game needs. We could have settled for something way earlier had I been more willing to make compromises. But at least this way I can say we picked the best solution for our problem, and the game is truly becoming the kind of game we want to play.

Regarding tetrahedral meshes and related techniques, I'm keeping a lengthy list of relevant papers that may be of interest to anyone else curious about the subject here.

Tuesday, January 6, 2015

Conspire: A Programming Environment for NOWHERE

It's time for another NOWHERE tech write-up. I've been tweeting about my work on Twitter up to the point where I was nudged to write a longer blog post about what the hell I'm actually doing, so this is an attempt at doing just that: a chronological description of my trials and tribulations and where I finally ended up.

The importance of tooling cannot be overstated. There are no tools out there for the kind of deeply procedural game we're working on, and good tooling comprises 90% of what makes the game, as nearly all of our content is procedural in one way or another, and not handmade. If there's currently a lack of procedural content out there, it's precisely because of the lack of tooling.

So I set out to construct an IDE in which assembling procedures in a maintainable way became easier. Inspired by UE4's Blueprints, I began with graph-based editing as a guide, as graphs make it relatively easy to describe procedural flow. As a warm-up, I wrote two tiny C libraries: a theming library based on Blender's UI style, and a low-level semi-immediate UI library that covers the task of laying out and processing widget trees.

The IDE, dubbed Noodles, was written on top of the fabled 500k big LuaJIT. The result looked like this:

Demonstrating compaction of node selections into subnodes, ad absurdum ;-)
A back-end compiler would translate these graph nodes back to Lua code to keep execution reasonably efficient, and I added support for GLSL code generation, something I had been planning to do from the beginning. I found that the ability to cover different targets (dynamic programming, static CPU, GPU pipelines) with a single interface paradigm became somewhat of a priority.
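For illustration, here is a hypothetical miniature of that back-end idea, not Noodles' actual code: a pure data-flow graph (one output per node) gets flattened into Lua-flavored source text. The node kinds and names are made up.

    # Flatten a tiny data-flow graph into source text. Each node is
    # (operation, list of inputs); inputs are other node names or literals.

    OPS = {"add": "({0} + {1})", "mul": "({0} * {1})", "sin": "math.sin({0})"}

    def emit(graph, node, lines, cache):
        if node in cache:
            return cache[node]            # shared nodes are emitted only once
        if node not in graph:
            return str(node)              # a literal or an external input
        op, inputs = graph[node]
        args = [emit(graph, i, lines, cache) for i in inputs]
        var = "v%d" % len(lines)
        lines.append("local %s = %s" % (var, OPS[op].format(*args)))
        cache[node] = var
        return var

    graph = {
        "scaled": ("mul", ["t", 2.0]),
        "wave":   ("sin", ["scaled"]),
        "out":    ("add", ["wave", 1.0]),
    }
    lines = []
    result = emit(graph, "out", lines, {})
    print("\n".join(lines) + "\nreturn " + result)

Swapping the OPS table for GLSL expression templates is all the sketch needs to retarget the same graph at a shader, which is roughly why a single interface paradigm across CPU and GPU targets became attractive.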

A simple GLSL shader in nodes, with output visible in the background.
The workflow was pretty neat for high level processing, but working with the mouse wasn't fast enough to construct low level code from scratch - refactoring was way easier though.

Noodles, shortly before I simplified the concept. Live-editing the OpenGL code for a cube rotating in the background.
I still didn't have much of an idea what the semantics of programming with nodes were going to be. I felt that the system should be able to analyze and modify itself, but a few design issues cropped up. The existing data model was already three times more complex than it needed to be. The file format was kept in text form to make diffing possible, and the clipboard also dealt with nodes in text form, but the structure was too bloated to make manual editing feasible. The foundation was too big, and it had to become lighter before I felt ready to work on more advanced features.

At this point, I didn't know much about building languages and compilers. I researched what kinds of existing programming languages were structurally compatible with Noodles and with data flow programming in general. The language should be completely data flow oriented, and therefore functional in nature. The AST must be simple enough to make runtime analysis and generation of code possible. The system must work without a graphical representation, and be general enough to retarget for many different domain-specific graphs.

It turned out the answer had been there all along. Since 1958, to be exact.

Or 1984, if we start with SICP. Apparently everyone but me has been in CS courses, and knows this book and the fabled eval-apply duality. I never came into contact with Lisp or Scheme early on, something I consider the biggest mistake of my professional career. There are two XKCD comics that are relevant to my discovery here:


Did you know the first application of Lisp was AI programming? A language that consists almost exclusively of procedures instead of data structures. I had the intuitive feeling that I had found exactly the right level of abstraction for the kind of problems we are and will be dealing with in our game.

My first step was changing the computational model to a simple tree-based processing of nodes. Here's the flow graph for a fold/reduce function:

Disassemble a list, route out processing to another function, then reassemble the list
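In text form, the same data flow reads roughly like this (a hedged Python rendering, not the graph's actual output):

    # fold/reduce: take the list apart, route each element together with the
    # running accumulator through a supplied function, and thread the result on.
    from functools import reduce

    def fold(fn, acc, items):
        for item in items:        # "disassemble the list"
            acc = fn(acc, item)   # "route out processing to another function"
        return acc                # the reassembled result

    print(fold(lambda a, b: a + b, 0, [1, 2, 3, 4]))    # 10
    print(reduce(lambda a, b: a + b, [1, 2, 3, 4], 0))  # same thing via the stdlib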
I figured out a way to do first-order functions in a graph, and did a little demonstrative graphic about it.
Click for a bigger picture
While these representations are informative to look at, they're neither particularly dense nor easy to construct, even with an auto-complete context box at your disposal. You're also required to manually lay out the tree as you build it; while relaxing, that is not particularly productive.

It became clear that the graph could be compacted where relationships were trivial (that is: tree-like), in the way Scheme Bricks does it:

Not beautiful, but an interesting way to compact the tree
And then it hit me: what if the editor was built from the ground up on Lispy principles: the simplest graphical visualization possible, extensible from within the editor, so that the editor would effectively become an editor-editor, an idea I've been pursuing in library projects like Datenwerk and Soil. Work on Noodles ended and Noodles was salvaged for parts to put into the next editor, named Conspire.

A very early screenshot. Atoms are rendered as widgets, lists are turned into layout containers with varying orientation. 
At its heart, Conspire is a minimal single-document editor for a simplified S-expression tree that only knows four immutable data types: lists (implemented as Lua tables), symbols (mapped to Lua strings), strings (a Lua string with a prefix to distinguish it from symbols) and numbers (mapped to the native Lua datatype).
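As a rough illustration of how small such a data model is, here is a hedged sketch in Python rather than Lua; it is not Conspire's actual reader, and string handling is omitted.

    # Minimal S-expression reader over three of the four types: lists, numbers
    # and symbols. Strings would follow the same pattern, tagged with a prefix
    # to distinguish them from symbols, as described above.

    class Symbol(str):
        pass

    def tokenize(src):
        return src.replace("(", " ( ").replace(")", " ) ").split()

    def parse(tokens):
        tok = tokens.pop(0)
        if tok == "(":
            lst = []
            while tokens[0] != ")":
                lst.append(parse(tokens))
            tokens.pop(0)           # drop the closing ")"
            return lst              # list
        try:
            return float(tok)       # number
        except ValueError:
            return Symbol(tok)      # symbol

    print(parse(tokenize("(ui-slider (:step 100 :range (0 1000)) 42)")))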

By default, Conspire maps editing to a typical text editing workflow with an undo/redo stack and all the familiar shortcuts, but the data only exists as text when saved to disk. The model is an AST; the view is that of a text editor.

Rainbow parentheses make editing and reading nested structures easier.
Conspire can be extended to recognize certain expressions in the same way (define-syntax) works in Scheme, and style these expressions to display different controls or data:

A numerical expression is rendered as a dragable slider that alters the wrapped number.
In the example above, the expression (ui-slider (:step 100 :range (0 1000)) <value>) is rendered as a slider widget that, when dragged with the mouse, alters the <value> slot in the AST the view represents. The operations are committed to the undo buffer the same way any other editing operation would be.
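Here is a made-up sketch of that recognition step; the names register_style and build_slider are hypothetical, and the real mechanism is define-syntax-like rather than a Python dictionary.

    # An extension registers a head symbol together with a function that turns
    # matching sub-trees into a widget description; everything else is recursed.

    STYLES = {}

    def register_style(head, builder):
        STYLES[head] = builder

    def style_tree(tree):
        if isinstance(tree, list) and tree and isinstance(tree[0], str) and tree[0] in STYLES:
            return STYLES[tree[0]](tree)
        if isinstance(tree, list):
            return [style_tree(node) for node in tree]
        return tree

    def build_slider(expr):
        # expr = ["ui-slider", [":step", step, ":range", [lo, hi]], value]
        _, props, value = expr
        opts = dict(zip(props[0::2], props[1::2]))
        return {"widget": "slider", "step": opts[":step"],
                "range": opts[":range"], "value": value}

    register_style("ui-slider", build_slider)
    print(style_tree(["ui-slider", [":step", 100, ":range", [0, 1000]], 250]))

Dragging the resulting slider would then write the new number back into the same slot of the tree and push the change onto the undo stack, exactly like a textual edit.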

Using this principle, the editor can be gradually expanded with more editing features. One of the first things that I added was the ability to nest editors within each other. The editor's root document then acts as the first and fundamental view, bootstrapping all other contexts into place:

The root document with unstyled markup. csp-edit declares a new nested editor.
Hitting the F2 key, which toggles styling, we immediately get the interpreted version of the markup document above. The referenced documents are loaded into the sub-editors, with their own undo stack:

Tab and Shift+Tab switch between documents.
New views and controllers such as a Noodles-like graph editor could be implemented as additional AST transformers, allowing the user to shape the editor into whatever it needs to be for the tasks at hand, which in our case will be game programming.

The idea here is that language and editor become a harmoniously inter-operating unit. I'm comparing Conspire to a headless web browser where HTML, CSS and Javascript have all been replaced with their S-expression-based equivalents, so all syntax trees are compatible with each other.

I've recently integrated the Terra low-level extensions for Lua and am now working on a way to seamlessly mix interpreted code with LLVM-compiled code so the graphics pipeline can run completely through Conspire, be scripted even while it is running, and yet keep C-like performance. Without the wonders of Scheme, all these ideas would have been unthinkable for the timeframe we're covering.

Oh, and there's of course one more advantage of writing your editor from scratch: it runs in a virtual reality environment out of the box.

Conspire running on a virtual hemispheric screen on the Oculus Rift DK2



Sunday, November 23, 2014

NOWHERE Progress Report November 2014



Leonard wrote in the Nowherian forum about our current progress with NOWHERE:

Right now is, hands down, the worst part of development. There are no fancy graphics to show off, no intricate gameplay, no surprising AI, no badass music, just the unglamorous system design that interests no one, but that we need so we can get all the aforementioned stuff into a manageable form that doesn't keep turning into a sluggish and unserviceable mess.

Sunday, July 20, 2014

NOWHERE Progress Report: It's Going Great & Terrible At The Same Time


This post is partly an explanation of what's currently going on, partly an attempt to summarize the situation for ourselves.

First, the bad news.

With the final release of NOWHERE scheduled for the end of 2015, we're currently about 30% into development, and tech-wise, it's going great. As you know, our goal is to be 99% asset free, that is: all assets are generated on the player's computer, for the player's world, and the player will also have a chance to guide this process in the game. I've finished prototyping and embedding the meshing & landscaping tech, wrote a new procedural audio engine, and got the procedural model generator to productive speeds, on which I'll write more at a later date.

We solved some tough design issues concerning Nowherian world structure, body physique and society building, although it all still exists only in thought and on paper, and none of it is implemented yet. Besides improving our procedural authoring tools, I'm currently laying the foundation for world persistence, which is a demanding challenge. You can track my progress online at our open source repositories. I can't wait to finally work more on the actual content.

I'm sorry, I messed up, that was the good news! So, now the bad news:

We have not only been greenlit on Steam (so could theoretically release NOWHERE as Early Access game any day), but have also been accepted to talk about NOWHERE at the GDC Europe Innovative Games Showcase in August (out of what I imagine to be hundreds of applicants).

Wait, that's actually good news again. Here's the bad news. For real now.

In light of the rising complaints about Early Access games being released too early, we wanted the next alpha release to make a good impression, and so we overran our deadlines numerous times to get to a point where the game would be presentable enough for an Early Access crowd. (And we do need that crowd. The founding campaign on our own website isn't attracting nearly as many supporters as we need to cover funding for the complete development time.)

The result is that we're broke, phenomenally so. The Humble Store revenues for this month won't cover our expenses sufficiently; Sylvia's dad lent us €1k to cover this month, but the situation is repeating. This month we're only getting $250 in revenues, but we need about $2k to cover rent, utilities, food, etc.

The original plan was that a Steam release this month would give us sufficient revenue on paper (both Humble and Steam pay out revenue with a one-month delay, e.g. this month we've made about $360, which is only due for payout at the end of August), so we could borrow a little more knowing that we'd be able to pay it back soon enough, and that the risk would be minimal. Alas, I can't seem to find an end for this alpha just yet, at least not one that would attract enough new interest. We gambled too hard.

I admit, we're really bad at advertising ourselves. Talk is cheap and people want to see results, which is why I dug deep into what I love to do (writing sweet, sweet game tech), and avoided doing anything that would not further development directly, such as video promotion, more interviews, and so on. This was probably not a good strategy. Now we need to find a way to fix this.

I would like to repeat that we are not ever going to give up. We're agile enough to deal with setbacks, and we own 100% of our project. This game is going to get made, whatever it takes. This is the work of a lifetime, and there is no other project we'd rather work on.

If you would like to help us out financially, and you feel you can afford to spare a small contribution on a monthly basis, please have a look at our Patreon page. Patreon contributions reach us sooner than any other revenue source.

Our fundraiser is of course also still running.

We would also like to thank, once again, all our founders and supporters for your trust and contributions; you're making this project possible, and you deserve to see an excellent outcome.

Update: my amazing mom read our blog article and, since my birthday is coming up,  spontaneously decided to send us a little "birthday present" (and offered emergency loans in case this happens again), so August is saved. You all have moved heaven and earth in the past days to get us back on track too, and it worked! September appears almost covered now as well. You all are incredible and we are lucky to have such strong support! I'm back on working on the next release, and we'll be able to do one or two more alphas for founders before the Steam Early Access release. We're aiming for early August. Let's hope the trip goes a bit smoother from here on.