# Introduction

In the last article, we saw how to draw objects on the screen. Now let’s do something completely different: let’s take our spaceship and move it around using the keyboard. It’s a fairly simple task, but, again, I feel like I have to explain some background first. And it’s important background: to handle input, we need to know what kinds of callback mechanisms there are in C++, and we need to know some things about binding.

# Input basics

## Active and passive event retrieval

There are two approaches to getting keyboard events (or any other type of event): one is “passive”, the other “active”. The active approach would be to ask if there’s a keyboard event and then do something with it, as in:

sge::input::keyboard::key_event e = keyboard_collector.get_event();
std::cout << "The key " << e.key_code() << " was pressed\n";


Technically, you have another pair of choices here: if there’s no key event currently in the event queue, you could either wait for an event to arrive or you could signal an error. But this discussion will be the subject of a later article.

In the passive approach, you don’t ask for a keyboard event. Instead, you register a callback function which is called whenever there’s an event. If you want to output all the key codes for the keys pressed, you’d write it like this:

namespace
{
void output_keycode(sge::input::keyboard::key_event const &e)
{
std::cout << "The key " << e.key_code() << " was pressed\n";
}
}

int main()
{
// Initialization stuff here
fcppt::signal::scoped_connection connection(
sys.keyboard_collector().key_callback(
&output_keycode));
// Stuff omitted here
while (running)
{
sys.window().dispatch();
}
}


Ok, the code looks longer and more complicated, but you don’t have to understand all of it, yet. With the key_callback function of the keyboard collector, we tell the input system: call output_keycode(e) whenever you get a key event e. In case you are wondering when the input system itself checks for new key events — that’s the ominous dispatch function which has been lurking in our main loop since the first article.

## Function pointer “objects”

When we write &output_keycode, we’re creating a “function pointer”. I’ll explain function pointers a bit differently than you normally read it in books and tutorials: Think of the line &output_keycode as a call to the function named &, passing it the function output_keycode. So it’s more like:

get_function_pointer(output_keycode)


This function get_function_pointer (or the & operator, respectively) returns an object. You can pass this object around, store it in a variable, just like you would with an ordinary int:

void (*myfn)(sge::input::keyboard::key_event const &) = &output_keycode;
myfn = &some_different_function;


Granted, the syntax is a bit eerie, but the code compiles just fine, as long as some_different_function is a function returning void and taking a single parameter of type sge::input::keyboard::key_event const &.

But this function pointer object is a special object, because it has a “function call operator”. The code above can be continued:

// Construct an "empty" key event, just for exposition
sge::input::keyboard::key_event e;
myfn(e);


Here, the object’s operator() is invoked, which itself invokes the function output_keycode, passing it e. Objects having at least one operator() are called functors, since they resemble ordinary functions1. Functors have a signature for each operator() they possess. With these new terms, key_callback is a function which takes an arbitrary functor with the signature void (sge::input::keyboard::key_event const &). Signatures are written just like you would declare a corresponding function, but without the name. We’ll get back to that in a bit.

## Function pointer limitations

Now that we know about function pointers and key_callback, back to our problem: let’s write a function that gets as parameters a key event and our spaceship sprite and which moves the sprite around if w, s, a or d are pressed:

namespace
{
void move(sprite_object &spaceship,sge::input::keyboard::key_event const &key)
{
// The ship should only move if the corresponding key was pressed, not released
if (!key.pressed())
return;

switch (key.key_code())
{
case sge::input::keyboard::key_code::w:
// move the spaceship 10 pixels upwards (on screen, y grows downwards)
spaceship.pos(
// Note that 0 and -10 are of type "int" and we chose our coordinates to
// be int (type_choices, remember?) so we can use those literals without
// casting them to "point::value_type" first
spaceship.pos() + sprite_object::point(0,-10));
break;
// handle s, a and d accordingly
}
}
}

int main()
{
// Initialization stuff here
fcppt::signal::scoped_connection connection(
sys.keyboard_collector().key_callback(
&move, ... oh wait


Do you see the problem which occurs now? The &move functor has an operator() with the signature

void (sprite_object &,sge::input::keyboard::key_event const &)

but key_callback expects a functor with the signature

void (sge::input::keyboard::key_event const &)

But how do we pass it the spaceship, which is defined inside our main? It’s not a global variable (and it shouldn’t be), so move has no access to it. As a side note: the spaceship shouldn’t move by a fixed number of pixels per keypress. It should begin to move when you press the key and stop moving when you release it. But let’s tackle one problem at a time.

To pass the spaceship to move, we need to “glue” the spaceship variable and the move function together and pass the resulting “package” to the key_callback function, which then fills in the key_event parameter when an event occurs. To do that, I’ll introduce an extra class:

class spaceship_move_connection
{
public:
spaceship_move_connection(
sprite_object &_spaceship)
:
spaceship_(
_spaceship)
{
}

// Nice, our very own function call operator!
void operator()(sge::input::keyboard::key_event const &e)
{
move(spaceship_,e);
}
private:
sprite_object &spaceship_;
};

int main()
{
// Initialization stuff here
fcppt::signal::scoped_connection connection(
sys.keyboard_collector().key_callback(
spaceship_move_connection(
spaceship)));
}


Now the signatures of spaceship_move_connection and the parameter of key_callback match, and the code runs just fine. You can use this “helper class” method to glue together a member function of a class and an instance of that class, too (again, by storing a reference to the instance and calling the member function in operator()).

To make our lives easier, boost provides a wrapper which generates the helper classes automatically. It’s called boost::bind and it’s used as follows:

#include <boost/bind.hpp>
#include <boost/ref.hpp>

int main()
{
// Initialization stuff here
fcppt::signal::scoped_connection connection(
sys.keyboard_collector().key_callback(
boost::bind(
&move,
boost::ref(spaceship),
_1)));
}


Looks pretty straightforward, doesn’t it? We give it the move functor, then a reference to the spaceship and leave the last parameter “open” so key_callback can “fill” it. If you leave out the boost::ref, bind will create a helper class containing a copy of the sprite_object instead of storing just a reference. The _1 is some static object defined in the global namespace and serves as a placeholder.

# Signals

This still leaves a few questions unanswered. For one: What’s this fcppt::signal::scoped_connection doing there? And how can key_callback accept arbitrary functors? What if I want to do that, too?

Well, since this is so important, we’re going to implement our own input system! This new input system will not deliver raw keystrokes but “actions” like “move left”, “move right”, etc. We’ll define a map from input::keyboard::key_code to action and translate the keys with that. This map could be read from a configuration file, enabling users to define their own key bindings, something found in most games. But since parsing configuration files is not the topic of this article, we will hard-code the key bindings for now. Let’s look at the code. Take your time to digest it, since it’s a bit longer than the other code snippets:

#include <sge/input/keyboard/device.hpp>
#include <sge/input/keyboard/key_code.hpp>
#include <fcppt/signal/object.hpp>
#include <fcppt/signal/auto_connection.hpp>
#include <fcppt/signal/scoped_connection.hpp>
#include <boost/bind.hpp>
#include <boost/function.hpp>
#include <map>

// Internal stuff goes to the anonymous namespace
namespace
{

namespace action
{
enum type
{
move_left,
move_right,
move_up,
move_down
};
}

class action_emitter
{
// For better readability, I'll put the member variable
// declarations and typedefs here.
private:
// This is the type of function we accept from the user.
// The boolean indicates if the action is "on" or "off".
typedef void callback_type(action::type,bool);

fcppt::signal::object<callback_type> signal_;
fcppt::signal::scoped_connection key_connection_;
std::map<sge::input::keyboard::key_code::type,action::type> key_to_action_;
public:
// We take a keyboard::device as a parameter. Note that "device" is just an
// interface name. We can pass our "keyboard_collector" from the second
// article here, too.
action_emitter(
sge::input::keyboard::device &keyboard)
:
// No parameters for the signal
signal_(),
key_connection_(
keyboard.key_callback(
// Our old friend, boost::bind. Here, we bind the member function
// "key_callback" to the key signal
boost::bind(
&action_emitter::key_callback,
this,
_1)))
{
// Exercise: Read this from file
key_to_action_[sge::input::keyboard::key_code::left] = action::move_left;
key_to_action_[sge::input::keyboard::key_code::right] = action::move_right;
key_to_action_[sge::input::keyboard::key_code::up] = action::move_up;
key_to_action_[sge::input::keyboard::key_code::down] = action::move_down;
}

// This is the function the user calls to register a callback
fcppt::signal::auto_connection
action_callback(
// Here's something new...boost::function. See the explanation below
boost::function<callback_type> const &f)
{
return signal_.connect(f);
}
private:
// This can be private, no one should call it from the outside
void
key_callback(
sge::input::keyboard::key_event const &e)
{
// No key mapping found, too bad.
if (key_to_action_.find(e.key_code()) == key_to_action_.end())
return;

signal_(
key_to_action_[e.key_code()],
e.pressed());
}
};
}


This is, again, a piece of code that might puzzle you in more than one place. First of all, we see two enumerations here: action::type and sge::input::keyboard::key_code::type (damn, a lot of namespaces!). You probably wonder why those have an extra namespace and are both called “type”. This is a workaround for a nuisance in C++: enumerations don’t start a new namespace! This code will fail to compile:

enum action { left,right,up,down };

void take_action(action a)
{
switch (a)
{
case action::left:
// ...
break;
}
}


…simply because “left” is not in the namespace “action” but (in this case) in the global namespace. You’d have to write case left which is…well, “namespace pollution”.

As you can see, we define a signal called signal_ in our action emitter. As template parameter, the signal gets a function signature which tells you what kind of functors can connect to the signal. In our example, we accept any functor that takes an action and a bool telling whether the action is switched on or off. Since we need this function signature in two places, we typedef it (the syntax, again, looks a little strange at first, but it’s the same as declaring a function called “callback_type”). We could also have written:

fcppt::signal::object<void (action::type,bool)> signal_;


Moving on, the signal would scream “ACTION EVENT!!?!!?!” and no one would hear it, were it not for the action_callback function, which connects a functor to the signal. In the signature of this function lies the answer to the question of how to accept arbitrary functors with the correct signature: we use a class called boost::function. Like the signal, boost::function takes as template parameter the signature of the functors to accept (we reuse our typedef here). Don’t ask how function works, though, it’s just magic. 😉

Interestingly enough, the connect function has a return value, a connection object. This is basically an object doing bookkeeping. The following code assumes that we don’t need a connection object. It should give you an idea of why it’s important:

// A pretty lame functor
class printer
{
public:
void operator()() const
{
std::cout << "Hello!\n";
}
};

int main()
{
// Define a signal with a pretty lame signature
fcppt::signal::object<void ()> my_signal;

{
printer p;
// Connect a reference to p; the signal stores no copy
my_signal.connect(boost::bind(&printer::operator(), &p));
// Works just fine...
my_signal();
}

// Shit, the printer object is dead now, this will fail!
my_signal();
}


We create a signal and attach a temporary object to it with connect. After the printer object’s destruction, we call the signal again. my_signal, however, cannot know that the printer is dead by now. It’s still in the list of connections, so the code above will give a segmentation fault (or something similar). To remedy the situation, we “track” the functor using the connection object, which, on destruction, will un-register the functor.

The astute reader2 might have noticed that there are two connection types, auto_connection and scoped_connection. Don’t worry about that for now, it has something to do with object ownership, a topic for later articles.

# Getting the ship moving

## Doing it almost right

The last thing we need to do is get the ship moving — but this time, it should move as long as a key is pressed, not by a fixed amount of pixels per keypress. To do that, we need a structure holding some state variables for the currently pressed keys (or better, the currently active actions). The structure should also make use of our action_emitter.

Let’s be object-oriented and create a class for our entire spaceship which contains the action_emitter. This way, we can create more than one spaceship, each with a different keyboard::device so you can implement a multiplayer mode featuring two keyboards:

// Some includes omitted here
#include <set>
#include <boost/foreach.hpp>

class ship
{
private:
action_emitter emitter_;
// Connection to the emitter
fcppt::signal::scoped_connection action_connection_;
// To make the ship as self-sufficient as possible, we have to put the ship's
// sprite in here, too.
sprite_object sprite_;
// Where the ship is heading
sprite_object::point direction_;

typedef
std::set<action::type>
action_is_active;

action_is_active active_actions_;
public:
ship(
sge::input::keyboard::device &keyboard,
// I didn't want to repeat the sprite initialization code, so I use it as a
// parameter.
sprite_object const &_sprite)
:
emitter_(keyboard),
action_connection_(
emitter_.action_callback(
boost::bind(
&ship::action_callback,
this,
_1,
_2))),
sprite_(_sprite),
// Initially, we're going in no direction at all
direction_(sprite_object::point::null()),
active_actions_()
{
}

void
update()
{
sprite_.pos(
sprite_.pos() + direction_);
}

// To render the sprite using the sprite system,
sprite_object const &
sprite() const
{
return sprite_;
}
private:
void
action_callback(
action::type const new_action,
bool const is_on)
{
// Update action cache
if (is_on)
active_actions_.insert(new_action);
else
active_actions_.erase(new_action);

// Update direction loop
direction_ = sprite_object::point::null();
// Meet the neat foreach macro which iterates through all active actions
// using the "current_action" variable
BOOST_FOREACH(
action::type const current_action,
active_actions_)
{
// This is inaccurate, actually. It introduces the famous
// sqrt(2) bug, but we don't care for now.
switch (current_action)
{
case action::move_left: direction_.x() -= 10; break;
case action::move_right: direction_.x() += 10; break;
case action::move_up: direction_.y() -= 10; break;
case action::move_down: direction_.y() += 10; break;
}
}
}
};


So far, so good. We’ll call update in our main loop, use the sprite getter function and pass the sprite to the sprite system. There is a tiny catch, however: when you run this code and press the left arrow key, for instance, the ship will vanish!

The problem is that update is called each frame, which means that each frame the ship is moved by ten pixels. Since we’re only rendering one sprite so far, you’re probably getting about 1000 frames per second3. It would be great if the sprite velocity were frame-rate–independent. This is only possible if we take the elapsed time into account.

## Tempus fugit

Instead of saying “move the ship by 10 pixels a frame” we would like to express “move the ship by 10 pixels a second”. To achieve that, we measure how long each frame takes — let’s say in milliseconds. Denote this time span by ‘d’. By stating:

position += d/1000.0 * point(10.0,10.0)


We now move by 10 pixels each second: if ‘d’ is 1000 for a single frame, we get 10; if ‘d’ is 500 for each of two frames, we still get 10, and so forth (proof by example, I usually hate that).

To measure the duration of each frame, sge provides some helper classes. Most importantly, there’s sge::time::timer. A timer has an associated duration. You can activate the timer and ask it how much of this duration (as a fraction) has passed so far. A timer can expire, however, so we need to reset it after asking how much time has passed.

There’s another complication: as you might recall, we’ve chosen to use “int” for the sprite positions. But the time span between two frames can be very small, so we need to switch to a different type. It wouldn’t be wise to use floating-point coordinates everywhere, though, so in the following code we’ll switch to float only where it’s necessary:

#include <sge/time/timer.hpp>
#include <sge/time/funit.hpp>
#include <sge/time/second.hpp>
#include <fcppt/math/vector/static.hpp>
#include <fcppt/math/vector/basic_impl.hpp>
#include <fcppt/math/vector/arithmetic.hpp>
#include <fcppt/math/vector/structure_cast.hpp>

class ship
{
private:
// Our floating vector type for the calculations
typedef
fcppt::math::vector::static_<float,2>::type
float_vector;

// direction_ is now a floating point vector.
// We need to store the position as float, too
float_vector direction_;
float_vector position_;

// ...

sge::time::timer timer_;
public:
ship(
sge::input::keyboard::device &keyboard,
sprite_object const &_sprite)
:
// ...
direction_(
float_vector::null()),
position_(
// structure_cast => switch from sprite vector to float_vector
fcppt::math::vector::structure_cast<float_vector>(
sprite_.pos())),
timer_(
sge::time::second(1))
{
}

void
update()
{
// funit for "floating time unit". elapsed_frames will return
// how much of the specified duration has elapsed. It'll
// be 1.0 when one second is over and 0.5 when half a second
// is over and so on.
sge::time::funit const delta = timer_.elapsed_frames();
timer_.reset();

position_ += static_cast<float>(delta) * direction_;
sprite_.pos(
// Now switch to sprite vector
fcppt::math::vector::structure_cast<sprite_object::point>(
position_));
}

// ...
};


And that’s it! Our ship moves by 10 pixels a second (pretty slow, you might want to increase the speed a little).

This article was a bit longer and it had no pictures, but I hope it wasn’t too boring. In the next tutorial, we’ll get some structure into the game — using nice C++ idioms, of course.

# Footnotes

1 Note that this has nothing to do with functors in category theory (or the Functor typeclass in Haskell, respectively).
2 I always wanted to say this.
3 This “frame overproduction” is pretty annoying and we’ll fix it in a later article.


# Introduction

Note: There’s a git repository at https://github.com/Phillemann/sgetutorial which includes limited installation instructions for the tutorial files as well as sge, fcppt and so on. Enjoy!

In the last tutorial, we set up a little framework to build our game upon. We only created a window which can be closed via the “escape” key. What we’re going to do now is add a spaceship – nothing more. But in the process, we’re going to discuss how to draw and manipulate arbitrary 2D objects (also called sprites) in sge. I’m first going to explain two concepts: sprites and atlasing.

Fig. 1: Various sprites. Top left: a square, axis-aligned sprite with a texture but no transparency. Top right: a rotated rectangle with a texture but no transparency. Bottom left: a square, axis-aligned sprite with a color but no texture. Bottom right: the same as top right, but with an alpha channel (transparency).

So what are sprites exactly? For us, a sprite is just a rectangle. This rectangle might be rotated, it might have a texture or it might be invisible. This doesn’t seem to bear much “substance”, but most 2D games consist of nothing but textured rectangles, so having an engine which supports them in an elegant and performant way is extremely important.

# Performance

## A simple analysis

But those are just rectangles, why care much about performance? Today’s graphics cards can render up to a gazillion of them at once! Correct, but this only applies to static geometry. Sprites are mostly dynamic, changing each frame (think of the player/enemies/projectiles/debris moving and/or rotating). This means that each frame, you have to update most of their data – which is done on the CPU – and send the new data to the GPU. Let’s see how much data we’re pumping to the GPU each second. Since a rectangle is represented as two triangles, we have 6 vertices per sprite. Each vertex has a color (4 bytes), a position (3 floats), and a texture coordinate (2 floats). Assuming you have 10,000 sprites, you get2:

$10{,}000 \cdot (4 \textrm{ bytes} + (3 + 2) \cdot 4 \textrm{ bytes}) \cdot 6 \textrm{ vertices} \cdot 60 \textrm{ fps} \approx 82 \textrm{ MiB/s}$

This is quite a lot, considering we’re only dealing with rectangles here. It would be good if we could specify exactly which attributes we want for a group of sprites. That way, if we never change a sprite’s color, for example, we save 4 bytes of traffic per sprite, per frame.

## The woes of transparency

There are at least two other performance-degrading factors which come into play: texture switches and the number of render calls. You might think that the sprite rendering function looks something like this:

foreach (texture in registered_sprite_textures)
{
renderer.activate_texture(texture);
draw_all_sprites_with_texture(texture);
}


which would mean that we have ‘n’ render calls for ‘n’ textures. But it’s a little bit more complicated. The problem is that if you use transparency, you cannot just draw all the sprites at once, because drawing two objects isn’t a commutative operation anymore. You have to sort the sprites based on their depth (z coordinate), see [1], [2] and [3] for more information. Luckily, that’s also managed by sge, but you have to at least be aware of it to use sge::sprite best.

## Atlasing

Now to texture switches and atlasing: First of all, in all graphic APIs you have at least one “active” texture. This is the texture that’s used when you render a textured rectangle, for example.1 If you want to draw three sprites with different textures, you have to switch textures thrice. Now, texture switching is an expensive operation which we want to avoid. One obvious solution would be to cram all the smaller textures into one bigger texture and then only use that texture. This technique is called atlasing. See figure 2 for a depiction.

Figure 2: Demonstration of atlasing. The dashed lines represent textures, the other images are the sprites that use the texture.

For our game, however, we won’t be using atlasing, but as you’ll see later you still need to know that it’s there.

## Choices

Okay, so now, finally, we can get to the code which utilizes sge::sprite. Before we can instantiate our sprites, we have to define a few types. At the most elementary level, we have to decide which integer and float type we want to use and also which color format (if we use colors, that is). This gives us another degree of freedom, since we might not be satisfied with, say, integer coordinates or float precision. The three choices are aggregated in the type_choices structure:

#include <sge/sprite/sprite.hpp>
#include <sge/image/color/rgba8_format.hpp>

typedef
sge::sprite::type_choices
<
int,
float,
sge::image::color::rgba8_format
>
sprite_type_choices;


As I said above, we would be lucky if we could decide which attributes (color, texture, …) our sprites have – and indeed we can! The next typedef defines exactly what a sprite contains:

#include <boost/mpl/vector.hpp>

typedef
sge::sprite::choices
<
sprite_type_choices,
boost::mpl::vector
<
sge::sprite::with_dim,
sge::sprite::with_color,
sge::sprite::with_texture,
sge::sprite::with_rotation
>
>
sprite_choices;


Don’t be puzzled by the boost::mpl stuff in the code. We can ignore that for now and just say that an mpl::vector is able to somehow aggregate arbitrary types, like the with_ types above. As you can see, we’ll be using colors, textures and rotations for our sprites. But what about that with_dim thingy, don’t all sprites have a dimension? Well, no. sge::sprite also supports so-called point sprites, which are not rectangles but squares. We’ll get to that topic later.

In the meantime, if you’re curious as to what other choices you have, here’s an exhaustive table of all sprite attributes:

| Type | Description |
| --- | --- |
| with_color | self-explanatory |
| with_depth | Adds a z coordinate to the sprite (available via sprite.z()). |
| with_dim | This sprite has a dimension (available via sprite.size()). |
| with_repetition | The sprite’s texture can be repeated (tiled) (available via sprite.repetition()). |
| with_rotation_center | When you set a sprite’s rotation, the rectangle is rotated around its center. With this, you can change the rotation pivot (available via sprite.rotation_center()). |
| with_rotation | self-explanatory |
| with_texture_coordinates | Gives you the ability to change the texture coordinates, which are usually (0,0), (1,0), (0,1), (1,1) for the four vertices (and a little different if you use repetition). |
| with_texture | self-explanatory |
| with_unspecified_dim | Reserved for point sprites (see below). |
| with_visibility | Enables you to make a sprite invisible via sprite.visible(false). Invisible sprites aren’t sent to the GPU, of course. |
| intrusive::tag | This takes a bit longer to explain, see below. |

## The most important typedefs

We’re almost finished typedeffing stuff. Two things are missing: The actual sprite type and the sprite system type. The sprite system is the structure responsible for rendering and caching stuff. It contains the vertex buffer and the index buffer and is able to reuse buffers across render calls to save performance. The sprites act more like a container and have virtually no inherent logic. The definition is simple3:

typedef
sge::sprite::object<sprite_choices>
sprite_object;

// You can ignore the "::type" at the end for now, I'll explain metafunctions in a later article.
typedef
sge::sprite::system<sprite_choices>::type
sprite_system;


## Creating a sprite

Finally, let’s create a sprite. Let’s say it should be positioned in the center of the screen and have a texture which is stored in a file called “ship.png”. The sprite’s size should correspond to the size of the texture and the color should be white. Voila:

#include <sge/texture/texture.hpp>
#include <sge/image2d/image2d.hpp>
#include <fcppt/math/dim/dim.hpp>

typedef
sge::sprite::parameters<sprite_choices>
sprite_parameters;

sprite_object ship(
sprite_parameters()
.texture(
sge::texture::part_ptr(
new sge::texture::part_raw(
sge::renderer::texture::create_planar_from_view(
sys.renderer(),
sys.image_loader().load(
FCPPT_TEXT("ship.png"))->view(),
sge::renderer::texture::filter::linear,
sge::renderer::resource_flags::none))))
.texture_size()
.any_color(
sge::image::colors::white())
.center(
sprite_object::vector(
512,384))
.elements());


### The “named parameter” idiom

I guess you thought it was more straightforward, but there are a few “quirks” to see here. First of all, a sprite is initialized with a “helper structure” called sprite::parameters. In C++, you don’t have the ability to pass “named parameters” to functions, as in:

f(name = "foobar",age = 10,weight = 67.4);


You can only do

f("foobar",10,67.4);


Which is ugly and unsafe. Think about what would happen if you wrongly remembered the order to be first “weight”, then “age”. The compiler might complain, or it might not. Also, you might want to have more than one default parameter, which you cannot easily do in C++. So we construct a helper class:

class helper
{
public:
helper()
:
name_("name not specified"),
age_(-1),
weight_(-1.0)
{
}

helper &name(std::string _name) { name_ = _name; return *this; }
helper &weight(double _weight) { weight_ = _weight; return *this; }
helper &age(int _age) { age_ = _age; return *this; }

std::string name_;
int age_;
double weight_;
};


Now you can say:

f(helper().name("foobar").age(67).weight(37.6));


And you can even omit parameters, which will then be initialized to default values in the constructor of helper. sprite::parameters, however, doesn’t default-initialize the values you don’t specify, they’re undefined. There’s sprite::default_parameters which does default-initialization. Also note the call to elements() at the end of the initialization. This is mandatory and you will get a nasty compiler error if you omit it.

### Textures and parts

The texture creation looks a bit more complex, but every part of it is easily explained. I’ll explain it “from the inside out”. As you can see, we use the systems’ image_loader to load an image, which is returned as a shared_ptr. To create a planar (2D) texture from it, we need a “view” of that image. Think of it as a “complete” representation of the image: to be really type-safe, you cannot just return a raw char* or a void*, you need to encode more information in the return value. The other parameters to the create_planar_from_view function are self-explanatory: we choose linear filtering for the texture (an arbitrary choice, really) and the texture has no “special” flags like “readable”.

Which leaves us with the texture::part stuff. But that we already covered: It’s atlasing. You do not want the sprite to take a whole “raw” texture, just a part of it. But since we’re not using atlasing, we have to wrap our texture in a part_raw which means “take this texture, and take all of it”.

### The rest

The sprite’s color is chosen from a predefined set of colors so we don’t have to specify all four channels. The position deserves a remark: if, instead of hard-coding the screen’s center, you compute it from the window size, you’ll run into structure_cast. What’s that doing there? And what’s a dim? It’s really simple: in fcppt, you have a vector type and a dim type. A vector has operations like “dot product” and “multiplication with a matrix” defined; a dim does not, since that’s nothing you usually want to do with a “size” type. This distinction is fundamental: things which are similar but are used in a (completely) different context should have different types, or at least types with different names (typedefs)! This is not to annoy the library’s users but to help them. Many bugs are introduced because of some automatic conversion between types, or even between identical types with different names (typedefs).

So, since center accepts a point but the window size is a dimension, we have to structure_cast it to a vector first.

### Drawing a sprite

Ok, we’ve got our sprite, but it merely sits there and isn’t drawn on the screen. Luckily, that’s much easier to explain than the sprite’s creation. First, the code:

sprite_system sprite_sys(
sys.renderer());

while (running)
{
sys.window().dispatch();

sge::renderer::scoped_block block(
sys.renderer());

sge::sprite::render_one(
sprite_sys,
ship);
}


Not much to explain here. sprite has a function render_one to render exactly one sprite (there are, of course, functions to render more sprites as well). The scoped_block is a very simple class which calls renderer->begin_rendering() on construction and renderer->end_rendering() on destruction. If you don’t call those functions, nothing will be drawn.

### Not quite there yet

If you compile and run the program at this point, however, it will crash. That’s because we’re trying to load an image but didn’t request an image loader from sge::systems. So, warp to the sge::systems initialization, add the following:

#include <sge/all_extensions.hpp>

sge::systems::instance sys(
	sge::systems::list()
	(sge::systems::image_loader(
		sge::image::capabilities_field::null(),
		sge::all_extensions))
	/* add rest of code here */);


Result of part 2

Compile, run, and see a cute spaceship in the middle of the screen (just make sure that “ship.png” is in your current working directory).

And we’re done for today. A long article for a small result, but it won’t be the last time you see sge::sprite, and I hope you got a good first impression of it. See you in part 3, where I’ll introduce input callbacks, so we can move the ship around.

# Footnotes

1 With multitexturing, you have multiple active textures, but that doesn’t matter here, you have to do a switch at some point.
2 We assume 4 color channels, one byte per channel, as well as 3 components for the position. Plus, we assume that floats are 4 bytes.
3 The astute reader might have noticed that we’re breaking our own rule here: Instead of putting everything sprite-related into a namespace, we prefix all of it with sprite_. We’ll remedy this situation later, when we’re giving the code some structure. For now, forgive me.

# Introduction

This is part 1 of the series I called “Game development in sge/C++”. There are two reasons for this tutorial:

1. There’s no real documentation for the game engine sge and no tutorial which shows how to use sge’s components in combination.
2. Most game developers using C++ don’t really know the language and don’t use any “modern” programming patterns – just C with classes, and maybe a templated vector math class.

So I’m going to address both of these issues by writing a game which uses sge and C++, explaining the language concepts as well as the game and engine concepts. But what kind of game? 2D/3D? Something extremely simple or something to build upon? After some thinking, I’ve decided to design a 2D top-down shooter which should behave somewhat like the popular Linux game Chromium B.S.U. or the classic Raptor: Call of the Shadows. You fly a jet fighter close to a planet’s surface and shoot at other fighters and buildings on the ground (see the screenshot).

The game’s development process should cover 2D sprites for the objects, point sprites for the particle effects, sounds, multiple game states and maybe some texture hackery to get the background to pan (I’m not sure about that, yet). I’m probably going to set up a github repository for the project so you can browse the source there, but large portions (if not all of the code) will be posted here on the blog. Also, I’m not going to describe how to set up the project (downloading/installing sge and its dependencies, writing a makefile etc.), just the code. The articles make heavy use of fcppt, but I will explain everything that’s used from it in detail, so you don’t need to read up on it.

# Framework

This first article might be a bit dry – not a lot of code – but I have to explain some basics first, so bear with me.

But enough of the introductory crap, let’s cut to the chase. Where to begin? Well, we have to…

1. create a window to draw onto
2. set up a main loop which runs until the user either presses escape or closes the program using some window button

sge supports multiple platforms (Windows and Linux, currently), so everything platform-dependent has to be abstracted. If you want to write a DirectX renderer backend, for example, you have to implement the interface sge::renderer::device which contains functions like create_texture or create_vertex_buffer. When you’re finished writing the DirectX renderer implementation, you have to compile it to a dynamic link library (a .dll on Windows, a .so on Linux). Then the user can load your plugin, instantiate it and pass to it a window (which was created using another plugin, responsible for window creation).
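The interface/plugin split can be sketched like this. All names and signatures here are illustrative stand-ins, not sge’s actual API:

```cpp
#include <cassert>

// The engine defines abstract interfaces...
struct texture
{
    virtual ~texture() {}
};

struct renderer_device
{
    virtual ~renderer_device() {}
    virtual texture *create_texture(int width, int height) = 0;
};

// ...and a backend, compiled into its own shared library
// (renderer_dx.dll / librenderer_dx.so), implements them:
struct dx_texture : texture
{
    dx_texture(int w, int h) : width(w), height(h) {}
    int width, height;
};

struct dx_device : renderer_device
{
    texture *create_texture(int width, int height)
    {
        return new dx_texture(width, height);
    }
};
```

In reality, the concrete device comes out of dlopen/LoadLibrary; the engine itself only ever sees the renderer_device interface.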

But loading the dlls by hand would be really tedious. So sge provides an initialization class called sge::systems::instance which takes care of loading the dlls and resolving the plugin dependencies (stuff like: pass the window to the renderer). The head of our main file (call it main.cpp for now) looks like this:

#include <sge/systems/systems.hpp>
#include <sge/viewport/viewport.hpp>
#include <sge/renderer/renderer.hpp>
#include <sge/input/keyboard/keyboard.hpp>
#include <sge/window/window.hpp>
#include <fcppt/math/dim/dim.hpp>
#include <fcppt/container/bitfield/bitfield.hpp>
#include <fcppt/text.hpp>
#include <fcppt/exception.hpp>
#include <fcppt/io/cerr.hpp>
#include <iostream>
#include <ostream>
#include <exception>
#include <cstdlib>

int main()
try
{
	sge::systems::instance sys(
		sge::systems::list()
		(sge::systems::window(
			sge::window::simple_parameters(
				FCPPT_TEXT("the_game"),
				sge::window::dim(
					1024,768))))
		(sge::systems::renderer(
			sge::renderer::parameters(
				sge::renderer::visual_depth::depth32,
				sge::renderer::depth_stencil_buffer::off,
				sge::renderer::vsync::on,
				sge::renderer::no_multi_sampling),
			sge::viewport::center_on_resize(
				sge::window::dim(1024,768))))
		(sge::systems::input(
			sge::systems::input_helper::keyboard_collector,
			sge::systems::cursor_option_field::null())));
	return EXIT_SUCCESS;
}
catch (fcppt::exception const &e)
{
	fcppt::io::cerr << FCPPT_TEXT("Exception caught: ") << e.string() << FCPPT_TEXT("\n");
	return EXIT_FAILURE;
}
catch (std::exception const &e)
{
	std::cerr << "Exception caught: " << e.what() << "\n";
	return EXIT_FAILURE;
}


You might already have some “wtf?!” moments when you read this code. Let’s first examine the structure and then the systems statement.

## Structure

### Notation

First, there’s this strange notation:

int main()
try
{
}
catch (...)
{
}
...


But that’s easily explained, it’s just a shorthand for

int main()
{
	try
	{
		// ...
	}
	catch (...)
	{
		// ...
	}
}


so you save a level of indentation.

### namespaces

You might also have noticed that there are a lot of ‘::’ in the code. That’s because sge relies heavily on namespaces. For every subdirectory in the sge/include directory, a new namespace is created, resulting in long names like sge::input::keyboard::key_code, as seen below. At first sight, this might look “abnormal” and tedious to write, but the fact of the matter is: you cannot have too many (nested) namespaces!

Most C++ libraries do not respect that and squeeze everything into one base namespace – even boost does it! There’s boost::source and boost::target which make no sense on their own – until you realize that they actually belong to the boost graph library and receive a graph edge as the parameter. But other boost sub-libraries might like to define a “source” and a “target” function, too. Alas, you cannot correct that mistake by moving source and target into boost::graph without breaking old code. The other direction – lifting functions from a nested namespace up into the base namespace – would have been possible without breakage.

Deep nesting isn’t as tedious as it looks, because C++ has lots of syntax to make working with namespaces easier: there’s using, using namespace, typedef, namespaces can be aliased and so on. So if you feel tired of repeating some long namespace prefix, consider a local using namespace, for example.
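For example, a namespace alias and a using declaration side by side. This is a toy sketch that just reuses the namespace names; key_count is made up:

```cpp
#include <cassert>

namespace sge { namespace input { namespace keyboard {
// a toy function standing in for the library's real contents
inline int key_count() { return 101; }
}}}

// Namespace alias: shorten a long prefix once, use it everywhere after
namespace kb = sge::input::keyboard;

inline int current_count()
{
    // A using declaration pulls in exactly one name for this scope
    using sge::input::keyboard::key_count;
    return key_count() + kb::key_count();
}
```

Both mechanisms are purely local: code outside this scope is unaffected, which is why deep nesting costs so little in practice.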

### fcppt strings

Everything you see in the code above which comes from fcppt somehow relates to fcppt::string. Let me explain why. In C++, there are two character types: char and wchar_t. char is always 1 byte long; the size of wchar_t is implementation-defined. On Windows it’s 2, on Linux it’s often 4 (which corresponds to the encodings typically used: UTF-16 and UTF-32). Consequently, there are two types of character literals, narrow and wide, like this:

char const chars[] = "foobar";
wchar_t const wchars[] = L"foobar";


System functions like CreateFile in Windows accept wchar_t. In Linux, calls like open accept char. The two systems use a different string type as the default. So the idea of fcppt::string is simple: Define its base character type as char or wchar_t depending on the operating system. But if you do that, you have to be consistent: You have to define a macro FCPPT_TEXT which wraps its argument in either "" or L"". You also have to define functions to convert from std::string and std::wstring to fcppt::string and so on. That’s why we have fcppt::io::cerr and fcppt::exception which operate on fcppt::char_type.
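To make the idea concrete, here’s roughly how such a character-type switch can be built. This is a sketch of the mechanism only; fcppt’s real typedefs and the real FCPPT_TEXT macro differ in detail:

```cpp
#include <cassert>
#include <string>

// Pick the platform's "native" character type at compile time.
#ifdef _WIN32
typedef wchar_t my_char_type;
#define MY_TEXT(x) L ## x   // wrap literals in L"" on Windows
#else
typedef char my_char_type;
#define MY_TEXT(x) x        // plain "" elsewhere
#endif

typedef std::basic_string<my_char_type> my_string;

// The same literal compiles to "foobar" on Linux and L"foobar" on Windows:
inline my_string example()
{
    return my_string(MY_TEXT("foobar"));
}
```

Everything downstream (streams, exceptions, conversion helpers) then has to be written in terms of my_char_type, which is exactly why fcppt ships its own cerr and exception.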

### systems

So next up: The statement which uses sge::systems::instance. Almost all the time, you want to initialize more than one sge subsystem, so sge::systems::instance receives a sge::systems::list of plugins. The list has an overloaded operator(), in case you’re wondering about the notation.

The first item in the list is the window, which gets the window title as a parameter, as well as the window’s desired size (more on that in a later article). Then there’s the renderer, which gets a lot more parameters, most of which should be self-explanatory: the bit depth, whether to enable vsync, and some other rather uninteresting settings.

The last part is the input system. This one is a bit less straightforward. For now, let’s assume we don’t want to concern ourselves with mouse input. In that case, we only need a “keyboard collector” object, which will be explained shortly; the rest of the input initialization can be ignored for now.

### Enter “main loop”

So that’s it. If you run the program, it should do absolutely nothing except display a window for a very short period of time. Not very exciting. We need at least a game loop which runs forever or until the user cancels it. We add the following code:

// <add other includes here>
#include <fcppt/signal/scoped_connection.hpp>

namespace
{
bool running;

void
exit_program(
	sge::input::keyboard::key_event const &e)
{
	if (e.pressed() && e.key_code() == sge::input::keyboard::key_code::escape)
		running = false;
}
}

int main()
try
{
	// <systems initialization from above here>
	running = true;

	fcppt::signal::scoped_connection const cb(
		sys.keyboard_collector().key_callback(
			&exit_program));

	while (running)
	{
		sys.window().dispatch();
	}
}


So we’ve got a boolean called “running” which obviously decides when the program has come to an end. We want this boolean to become false when the user presses the “escape” key on the keyboard. So we ask the input system to call the function exit_program whenever there is a key event, meaning a key up or key down (as you can see, key_event provides the member function pressed to query the key’s state). Inside this callback, we test if the key was pressed and if the key’s code was “escape”. Simple enough. But why is there no input “system”, only a “keyboard collector”?

The reason that it’s a “collector” and not just a “keyboard” is that you might have more than one keyboard attached to your computer. And you might even take advantage of that – think of a game with a split-screen mode where two players can compete using two keyboards on the same computer. Most times, though, you don’t want to treat the keyboards separately, so sge::input offers the collector which – well – collects all events from all keyboards and forwards them to the user, ignoring where they came from.

A quick word on the fcppt::signal::scoped_connection type: We’ll cover signals and binding when we get the jet fighter to move via the keyboard, so let’s just ignore that topic completely, at least for now.

Finally, the main loop consists of a lonely dispatch call which collects all window events – like a mouse move, a keyboard press etc. – from the main window and calls the registered callback functions (like our exit_program).

So folks, that’s it for now! I hope it wasn’t too boring. Next time we’ll be adding a jet fighter to our application using the sge::sprite subsystem.

## n-dimensional interpolation

In the course of writing a perlin noise class which takes the dimension as a template parameter, I needed a function which interpolates in n dimensions. Generally speaking, interpolation is the process of “guessing” a function value between two “known” function values, whereas extrapolation is guessing a new function value outside of the range of “known” values (see figure 4).

Figure 4: Interpolation and extrapolation

We first consider linear interpolation in one dimension. In this case, you have an $\mathbb{R}$ vector space $V$ of values – meaning the elements have the following two operators:

$\alpha \cdot a: (\mathbb{R},V) \to V$
$a + b: (V,V) \to V$

Those are the elements you want to interpolate. They could be colors, numbers, vectors, time stamps (!?) or whatever you can imagine. From this vector space, take two elements $a_0, a_1 \in V$ and a scalar $t \in \mathbb{R}$ and calculate

$x = \textrm{interpolate}_1(t,a_0,a_1) = t \cdot a_0 + (1-t) \cdot a_1$

Fig. 1: One-dimensional interpolation

In figure 1, this is depicted for $V = \mathbb{R}^2$. Now suppose you want to interpolate not on a line but on a square. In two dimensions, you have four points $a_0,\ldots,a_3$ instead of two (the number always doubles) and two scalars $t_0,t_1 \in [0,1]$ (or one point $t \in [0,1]^2$), see figure 2 for an example.

Fig. 2: Two-dimensional interpolation

To interpolate the point $x$ between the four points, you have to do $2^n-1 = 3$ one-dimensional interpolations: one for each of the two lines, and then one “in between” the lines.

$x_0 = \textrm{interpolate}_1(t_0,a_0,a_1)$
$x_1 = \textrm{interpolate}_1(t_0,a_2,a_3)$
$x = \textrm{interpolate}_1(t_1,x_0,x_1)$
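These three steps translate directly into code. A small C++ sketch, following the article’s convention that $t = 1$ selects the first argument (interpolate1 and interpolate2 are names I made up here):

```cpp
#include <cassert>

// The article's convention: t = 1 yields a0, t = 0 yields a1.
double interpolate1(double t, double a0, double a1)
{
    return t * a0 + (1.0 - t) * a1;
}

// The three one-dimensional interpolations for the square, exactly as
// in the formulas above: once per line, then between the lines.
double interpolate2(double t0, double t1,
    double a0, double a1, double a2, double a3)
{
    double const x0 = interpolate1(t0, a0, a1);
    double const x1 = interpolate1(t0, a2, a3);
    return interpolate1(t1, x0, x1);
}
```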

And just for the fun of it, it’s also possible in three dimensions (so interpolation in a cube), where you have to interpolate first on the “front” side, then the “back” side and then inside the cube. See figure 3.

Figure 3: Three-dimensional interpolation

The corresponding interpolations are:

$x_0 = \textrm{interpolate}_1(t_0,a_0,a_1)$
$x_1 = \textrm{interpolate}_1(t_0,a_2,a_3)$
$x_2 = \textrm{interpolate}_1(t_0,a_4,a_5)$
$x_3 = \textrm{interpolate}_1(t_0,a_6,a_7)$
$x_4 = \textrm{interpolate}_1(t_1,x_0,x_1)$
$x_5 = \textrm{interpolate}_1(t_1,x_2,x_3)$
$x = \textrm{interpolate}_1(t_2,x_4,x_5)$

## A general formula

Now, as you can imagine, this can easily be generalized to n dimensions using some recursion formula. It took some time, but I finally came up with one. Given a vector $a = (a_0,\ldots,a_{2^n-1}) \in V^{2^n}$ of points and one “position” vector $t \in [0,1]^n$, define the interpolation in n dimensions as:

$\textrm{interpolate}_n(a,t,i,f) := f(t_{n-1},\textrm{int}_0,\textrm{int}_1)$
$\textrm{interpolate}_1(a,t,i,f) := f(t_0,a_i,a_{i+1})$

with

$\textrm{int}_0 := \textrm{interpolate}_{n-1}(a,t,i,f)$
$\textrm{int}_1 := \textrm{interpolate}_{n-1}(a,t,i + 2^{n-1},f)$

There are two new “unknowns” here, $i$ and $f$. The first variable specifies the index to the vector $a$. It has to be 0 for the first function call. $f$ specifies which “base” interpolation method to use. In the descriptions above we always assumed linear interpolation (so $f = \textrm{interpolate}_1$), but other methods are possible, like polynomial interpolation:

$s(t) := 3t^2 - 2t^3$
$f(t,a_0,a_1) := s(t) \cdot a_0 + (1-s(t)) \cdot a_1$

This gives a more smooth transition at the boundaries (in case you’re interpolating on a grid, see below), since the polynomial is smooth at 0 and 1, respectively. See figure 5.

Figure 5: Linear vs. polynomial interpolation
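The polynomial basis function translates directly into code (a small sketch; the function names are mine):

```cpp
#include <cassert>

// s(t) = 3t^2 - 2t^3: smooth at both 0 and 1 (s'(0) = s'(1) = 0)
double s(double t)
{
    return 3.0 * t * t - 2.0 * t * t * t;
}

// Polynomial basis interpolation, same shape as the linear f above
double f_poly(double t, double a0, double a1)
{
    return s(t) * a0 + (1.0 - s(t)) * a1;
}
```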

To show that the formula makes sense, let’s apply it to the 2D case: We’ve got 4 values $a_0,a_1,a_2,a_3 \in V$, a point $t=(t_0,t_1)$ and an interpolation function $f$. The first function call has to be with $i=0$. We get:

$\textrm{interpolate}_2(a,t,0,f) = f(t_1,\textrm{interpolate}_1(a,t,0,f),\textrm{interpolate}_1(a,t,2,f))$
$\textrm{interpolate}_1(a,t,0,f) = f(t_0,a_0,a_1)$
$\textrm{interpolate}_1(a,t,2,f) = f(t_0,a_2,a_3)$

So indeed, we get back our 2D interpolation function. The same holds for 3D.
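A direct C++ transcription of the recursion might look like this (a sketch; interpolate_n and lerp are names I chose, and the basis function is passed as a plain function pointer):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Basis interpolation f, the article's convention: t*a0 + (1-t)*a1
double lerp(double t, double a0, double a1)
{
    return t * a0 + (1.0 - t) * a1;
}

// interpolate_n(a, t, i, f) as defined above: n is the dimension,
// a holds 2^n values, t holds n scalars in [0,1], i starts at 0.
double interpolate_n(
    std::size_t n,
    std::vector<double> const &a,
    std::vector<double> const &t,
    std::size_t i,
    double (*f)(double, double, double))
{
    if (n == 1)
        return f(t[0], a[i], a[i + 1]);
    return f(
        t[n - 1],
        interpolate_n(n - 1, a, t, i, f),
        // the second half of the values starts 2^(n-1) entries further
        interpolate_n(n - 1, a, t, i + (std::size_t(1) << (n - 1)), f));
}
```

For n = 2 this expands to exactly the three calls shown above.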

## Interpolation inside a grid

Now, say you’re in an n-dimensional grid $G: \mathbb{N}^n \to V$ where you have values of type $V$ at each grid point $G_{i_1,\ldots,i_{n}}$. You’re given a point $p \in \mathbb{R}^n$ which is inside this grid – but not necessarily at the discrete grid points – and you want to calculate the value at $p$ using some basis interpolation function $f$. The first thing you need to do is calculate the grid point which is at the “lower left” of the point $p$. This is easy:

$\tilde{p} = (\lfloor p_0 \rfloor,\ldots,\lfloor p_{n-1} \rfloor)$

meaning you just round down component-wise. Next, you need all of $p$‘s other neighbors, and you need them in a predictable order, so our function $\textrm{interpolate}_n$ can work on it. This sequence of neighbors is shown in figures 1 to 3 already. We have to generate the binary vector sequence of dimension n. In dimension 2, it consists of four elements:

$((0,0),(0,1),(1,0),(1,1))$

In dimension 3, it consists of 8 elements:

$((0,0,0),(0,0,1),(0,1,0),(0,1,1),(1,0,0),(1,0,1),(1,1,0),(1,1,1))$

Its definition is actually pretty simple: Take the numbers from $0$ to $2^{n}-1$ and assign to each the coefficient sequence $(a_0,\ldots,a_{n-1})$ of its 2-adic expansion (think of the binary representation of a number):

$\sum_{i=0}^{n-1} a_i 2^i$

Or, algorithmically, construct the sequence via (pseudocode):

vector_sequence binary_vectors(int n, vector v)
{
	vector_sequence result;
	if (n == 0)
	{
		v[0] = 0;
		result.insert(v);
		v[0] = 1;
		result.insert(v);
	}
	else
	{
		v[n] = 0;
		result += binary_vectors(n-1, v);
		v[n] = 1;
		result += binary_vectors(n-1, v);
	}
	return result;
}

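The same sequence can also be generated without recursion, by counting from $0$ to $2^n-1$ and reading the bits off, most significant first. A C++ sketch (not the pseudocode above verbatim):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

typedef std::vector<int> binary_vector;
typedef std::vector<binary_vector> vector_sequence;

// All 2^n binary vectors of dimension n, first coordinate most
// significant, so for n = 2 the order is (0,0), (0,1), (1,0), (1,1).
vector_sequence binary_vectors(std::size_t n)
{
    vector_sequence result;
    for (std::size_t i = 0; i < (std::size_t(1) << n); ++i)
    {
        binary_vector v(n);
        for (std::size_t bit = 0; bit < n; ++bit)
            // extract bit (n-1-bit) so the first component varies slowest
            v[bit] = static_cast<int>((i >> (n - 1 - bit)) & 1u);
        result.push_back(v);
    }
    return result;
}
```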

To calculate the interpolated point $p$, all you have to do is pass $p - \tilde{p}$ as well as the binary vectors to the function interpolate and you’re done:

value interpolate_in_grid(grid g, real_vector p)
{
	vector ptilde = floor(p);
	// Pass the vector (0,0,...) as the starting value for binary_vectors
	vector_sequence neighbors = binary_vectors(n, null_vector<n>());

	// Add the current position to the binary vectors to get the
	// "real" neighbors of our point.
	foreach (vector &v, neighbors)
		v = ptilde + v;

	// Map from the grid _positions_ to the _values_ at the positions
	// (which could be colors, scalars, ...)
	value_sequence values;
	foreach (const vector v, neighbors)
		// Access grid at position v
		values.push_back(g[v]);

	return
		interpolate(
			n,
			values,
			p - ptilde,
			// Start at index 0
			0,
			// Use linear interpolation
			linear_interpolation);
}


## Code snippet: easy lexicographical ordering

The C++ standard library provides the std::lexicographical_compare function which takes two iterator ranges and returns a boolean, indicating which of the ranges is smaller. But what if we have a data structure like this:

struct foo
{
	int a;
	std::string b;
	char c;

	bool operator<(foo const &) const;
};


for which we want to write an operator< which does a lexicographical comparison of a, b and c. We end up with the following:

bool foo::operator<(foo const &r) const
{
	if (a < r.a)
		return true;
	if (r.a < a)
		return false;
	if (b < r.b)
		return true;
	if (r.b < b)
		return false;
	return c < r.c;
}


In case you’re wondering, I’m assuming we work on types which only have an operator<, which is why I cannot use operator> and operator!=.

Now, this is tedious to write and error prone. And it can be solved more elegantly. You might know that std::pair has an operator< which does lexicographical comparison. So I checked if boost::tuple has this feature, too. As it turns out, it does! So the above code can be reduced to:

bool foo::operator<(foo const &r) const
{
	return boost::make_tuple(a,b,c) < boost::make_tuple(r.a,r.b,r.c);
}


Note that you need the tuple/tuple_comparison.hpp header for this to work. I checked the latest C++0x draft, and it confirms that std::tuple keeps this feature. And if you’re lucky, all of the tuple machinery in the above code is optimized away, leaving exactly the handwritten comparison from above.
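As an aside, C++0x also adds std::tie to the <tuple> header, which builds a tuple of references, so not even copies are made. A sketch of the same operator written that way:

```cpp
#include <cassert>
#include <string>
#include <tuple>

struct foo
{
    int a;
    std::string b;
    char c;

    bool operator<(foo const &r) const
    {
        // std::tie creates tuples of references; comparison is
        // lexicographic over (a, b, c), just like the boost version.
        return std::tie(a, b, c) < std::tie(r.a, r.b, r.c);
    }
};
```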

## Designing a filter graph for post-processing effects

Today, every game comes with a set of postprocessing (pp) effects. Those are effects which are applied to the already rendered scene, meaning they don’t deal with geometry, just with textures. One of the earliest and most widely used pp effects is the so-called bloom effect (which should not be confused with the Bloom filter data structure).

The bloom effect applied to a picture of a church window

A game utilizing this pp effect does the following:

1. Render scene to texture original
2. Render texture original to texture highlight using a filter which extracts the bright parts of the image
3. Render texture highlight to texture blur using some sort of low-pass filter, possibly applying a (separable) 5×5 Gaussian blur multiple times to really smooth it.
4. Render both original and blur to result, combining the two textures somehow (maybe just add them together).

This process can be conveniently expressed using a directed graph:

Directed graph for the bloom effect

So the first idea is to represent this chain of filters as a directed graph using boost::graph. The library has the ability to do a topological sort on the filters, so you don’t have to worry about the correct order of application. What you do have to worry about is how to define what a “filter” really is and what information to store at the graph nodes. In the graph above, you can spot three types of filters: nullary filters take no texture as input but produce a texture as output (such as the filter to render the scene to a texture); unary filters take a texture as input and produce a texture as output (highlight, blur); and finally, binary filters take two textures and produce a result (the combine filter).
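A sketch of what such a class hierarchy could look like. The names base/nullary/unary/binary come from the article; the bodies here are my own guess, not fruicut’s actual code:

```cpp
#include <cassert>
#include <memory>

namespace filters
{
struct texture {};
typedef std::shared_ptr<texture> texture_ptr;

struct base
{
    virtual ~base() {}
};

// Takes no input texture, produces one (e.g. "render scene to texture")
struct nullary : base
{
    virtual texture_ptr apply() = 0;
};

// One texture in, one out (highlight, blur)
struct unary : base
{
    virtual texture_ptr apply(texture_ptr input) = 0;
};

// Two textures in, one out (combine)
struct binary : base
{
    virtual texture_ptr apply(texture_ptr first, texture_ptr second) = 0;
};

// An example unary filter, so the dynamic_cast step can be demonstrated
struct highlight : unary
{
    texture_ptr apply(texture_ptr)
    {
        return texture_ptr(new texture());
    }
};
}
```

After the topological sort, each node’s arity is recovered exactly as the article describes: try a dynamic_cast to each of the three interfaces in turn.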

More generally we could define an n-ary filter as a function which takes n textures and produces exactly one texture as a result. So in the above example you have to do the following (and this is what is currently implemented in fruicut):

1. Create the 4 filters and add them to the graph (specifying the dependencies of each filter). This might be done only once at program startup. But it’s also conceivable that you add or delete filters later to increase performance. Also, let’s assume for now that the graph stores references to a common base type filters::base.
2. Sort the filters in topological order.
3. For each filter f in the resulting sequence, check how many predecessors it has. Try to dynamic_cast f to either filters::nullary, filters::unary or filters::binary. If that doesn’t work, there’s something wrong with the graph.
4. If the filter is nullary, apply it, store its result somewhere. If the filter is not nullary, collect all result textures from the predecessors and apply the filter. Again, store the result somewhere.
5. Take the result of the last filter in the sequence and render it to the framebuffer (this could also be done with a unary filter which produces an empty texture as output, but that’s an implementation detail).

So with this approach, a filter is basically a wrapper around a shader plus one or more textures (the blur shader, for example, needs two textures if it’s implemented with a Gaussian). Pretty simple, really, but there are a bunch of problems with this approach: First of all, a filter might produce more than one result. OpenGL and DirectX both support multiple render targets at once (making deferred shading feasible). But that’s nothing I personally miss – at least currently – so I won’t discuss this further.

Secondly, what if you want to use a filter more than once? Maybe you want another cool effect which, again, needs the blur filter. You’d have to create the filter twice, thus load the shader twice and create the textures twice.
Textures present a more general problem: in the above approach, you create too many of them! Assuming you add another effect using the blur shader to just blur the whole screen (something you might do to add a smooth fadeout), let’s see exactly how many.

Extended filter graph with texture indicators

Here, we assumed that the screen resolution is 1024×768 (which is really, really low by today’s standards) and that the bloom shader works on smaller, 512×512 textures, since that’s usually the case. As you can see, we create 4 (!!!) 1024×768 textures and 3 with size 512×512, when clearly, we can do better. For example, when combine finishes, the textures for highlight, blur and original are not used anymore and could be re-used in the last blur stage. This effect saves more textures the more independent “filter paths” you have.

This problem could be tackled with a relatively simple “texture manager” which you can query for a texture:

texture_ptr t = texture_manager.query(dim(1024,768),filter::linear,flags::none);


The above would try to retrieve an existing texture from a pool of “non locked” textures and create a new texture if the query fails. This could also be implemented with a proxy class like this:

lazy_texture t = texture_manager.query(dim(1024,768),filter::linear,flags::none);

// later that day...

texture_ptr real_texture = t.get_texture();

// work with real_texture


No matter how you implement it, it will slow down the first rendered frame, since that’s when all the textures have to be created.

In a similar way, we could add a “shader pool” which stores a (ptr_)map from a pair of vertex and fragment shader file names to a loaded shader for later retrieval.
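Such a shader pool could be as small as this. A hypothetical sketch: shader here is a stand-in for a real shader class, and a real implementation would compile the files instead of just storing their names:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Hypothetical stand-in for a real, compiled shader object.
struct shader
{
    std::string vertex_file, fragment_file;
};

class shader_pool
{
public:
    // Return the cached shader for this file pair, loading it on first use.
    shader &query(std::string const &vertex, std::string const &fragment)
    {
        std::pair<std::string, std::string> const key(vertex, fragment);
        map_type::iterator it = pool_.find(key);
        if (it == pool_.end())
        {
            shader s; // a real implementation would compile the files here
            s.vertex_file = vertex;
            s.fragment_file = fragment;
            it = pool_.insert(std::make_pair(key, s)).first;
        }
        return it->second;
    }
private:
    typedef std::map<std::pair<std::string, std::string>, shader> map_type;
    map_type pool_;
};
```

Two filters asking for the same (vertex, fragment) pair then share one shader instead of loading it twice.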

In this scheme, a filter would receive said texture manager as well as said shader pool, making it very lightweight (even copyable).

So those are my thoughts for today. Maybe I find enough time to implement the texture and shader pool – if I don’t get any better ideas, that is.

## A very simple locked_value

I noticed that boost (or more specifically boost::thread) doesn’t directly support the notion of a value (of any type) that can be read and written to by multiple threads. So I used boost::mutex and boost::lock_guard to write one:

#include <boost/thread/mutex.hpp>
#include <fcppt/noncopyable.hpp>

template<typename T>
class locked_value
{
	FCPPT_NONCOPYABLE(locked_value)
private:
	typedef boost::mutex lockable;
	typedef boost::lock_guard<lockable> lock;
public:
	typedef T value_type;

	explicit locked_value(
		value_type const &_value)
	:
		value_(
			_value)
	{
	}

	locked_value()
	{
	}

	value_type const value() const
	{
		lock lock_(
			mutex_);
		return value_;
	}

	void value(
		value_type const &t)
	{
		lock lock_(
			mutex_);
		value_ = t;
	}
private:
	value_type value_;
	mutable lockable mutex_;
};


Notice the mutable specifier on the mutex_ variable – one of its few smart uses: without it, the const member function value() couldn’t lock the mutex. Also note that I used the noncopyable macro from fcppt, which is “cleaner” than deriving from boost::noncopyable and can even be implemented using deleted functions from C++0x (although that’s currently not the case).

Here’s a little test program just to show how it is used (assuming you saved the above code in a file locked_value.hpp):

#include "locked_value.hpp"
#include <sge/time/timer.hpp>
#include <sge/time/second.hpp>
#include <sge/time/millisecond.hpp>
#include <boost/thread/thread.hpp>
#include <boost/bind.hpp>
#include <boost/ref.hpp>
#include <iostream>

namespace
{
void
change_value(
	locked_value<int> &v)
{
	std::cout << "Changing value\n";

	sge::time::timer timer(
		sge::time::second(1));

	while (true)
		if (timer.update_b())
			v.value(
				v.value()+1);
}
}

int main()
{
	locked_value<int> my_int(
		0);

	// Increment the value once a second in a second thread
	boost::thread other(
		boost::bind(
			&change_value,
			boost::ref(
				my_int)));

	sge::time::timer timer(
		sge::time::millisecond(
			100));

	while (true)
	{
		if (timer.update_b())
			std::cout << my_int.value() << '\n';
	}
}


As you can see, you need sge for this. I used it because it provides a very handy timer class.