For you other techies and tech-junkies out there, here's a little technical glance at things...
The engine is smart and knows how to generate meshes from data. So in the test application code, creating a quad, for example, is as easy as this:
Code:
void _createMesh()
{
    var vertices = new IVertex[] {
        new VertexPosNorm(new Vector4(-1,  1, 0, 1), Vector3.Forward), // top-left
        new VertexPosNorm(new Vector4( 1,  1, 0, 1), Vector3.Forward), // top-right
        new VertexPosNorm(new Vector4(-1, -1, 0, 1), Vector3.Forward), // bottom-left
        new VertexPosNorm(new Vector4( 1, -1, 0, 1), Vector3.Forward), // bottom-right
    };

    // Two indexed triangles make up the quad
    MeshDataIndexed mdi = new MeshDataIndexed(vertices, new uint[] { 1, 0, 2, 3, 1, 2 });
    triMesh = MeshIndexed10.FromIndexedMeshData(mdi, context);
}
"IVertex" is an interface that all vertex-type structures must implement (in this case, VertexPosNorm -- a vertex-type structure that holds only a position and a normal vector). This not only makes the engine's vertex- and mesh-handling code much simpler, but it also means user code can implement custom vertex types the engine has never seen before, and they will all work seamlessly!
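To make that concrete, here's a rough sketch of what a user-defined vertex type could look like. The post doesn't show the actual IVertex contract, so the interface member, the "Color" semantic, and the VertexPosColor type below are all my guesses for illustration:

Code:
    // Hypothetical sketch -- member names are assumptions, not the engine's real API
    public interface IVertex
    {
        int SizeInBytes { get; } // assumed; the engine may infer size via reflection instead
    }

    // A custom vertex type the engine has never seen before
    public struct VertexPosColor : IVertex
    {
        [HLSLInputBinding(HLSLSemantics.Position, Format.R32G32B32A32_Float)]
        public Vector4 Position;

        [HLSLInputBinding(HLSLSemantics.Color, Format.R32G32B32A32_Float)] // "Color" semantic assumed
        public Vector4 Color;

        public int SizeInBytes { get { return 32; } } // two 16-byte Vector4s
    }

As long as the fields are annotated, the engine has everything it needs to handle the type like any built-in one.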
If you're familiar with DirectX, you might be wondering how it can be so simple... Where's all the Input Layout data for the input assembler? How is this data serialized? How is it bound to the GPU? The answer: the engine is smart enough to do all of this behind the scenes, as long as it has a little information. When you create a new vertex type, for example, the engine provides a special attribute class that lets you specify HLSL input information for the fields of the vertex. Here's an example:
Code:
// Example fields from the VertexPosNorm structure:
[HLSLInputBinding(HLSLSemantics.Position, Format.R32G32B32A32_Float)]
public Vector4 Position;
[HLSLInputBinding(HLSLSemantics.Normal, Format.R32G32B32_Float)]
public Vector3 Normal;
That's all there is to it! The engine reflects over the vertex type and reads the HLSLInputBinding attributes to gather the information needed to generate InputElements... and the Input Layout for the Input Assembler is generated on the fly!
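For the curious, here's roughly what that reflection step might look like internally. This is my own sketch (the engine's internals aren't shown in the post), and I'm assuming the attribute exposes Semantic and Format properties; the InputElement constructor shape is based on how SlimDX-style D3D10 wrappers typically look:

Code:
    // Hypothetical sketch of on-the-fly InputElement generation
    static InputElement[] BuildInputElements(Type vertexType)
    {
        var elements = new List<InputElement>();
        int offset = 0;
        foreach (FieldInfo field in vertexType.GetFields())
        {
            var binding = (HLSLInputBinding)Attribute.GetCustomAttribute(
                field, typeof(HLSLInputBinding));
            if (binding == null)
                continue; // field carries no shader input data

            elements.Add(new InputElement(
                binding.Semantic.ToString(), // e.g. "POSITION", "NORMAL"
                0,                           // semantic index
                binding.Format,              // e.g. Format.R32G32B32A32_Float
                offset,                      // byte offset within the vertex
                0));                         // input slot
            offset += Marshal.SizeOf(field.FieldType);
        }
        return elements.ToArray();
    }

The result can then be handed straight to the device when creating the Input Layout, with no per-type boilerplate in user code.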
When you actually want to draw your mesh, you'll find it equally easy and pain-free. All you have to do is create a simple render operation, implemented in the engine as the "RenderOp" class. Here's an example of the code used to draw a mesh:
Code:
var op = new RenderOp(triMesh, mat, setup); // Pass in the mesh, material and shading setup
renderer.PushOp(op); // One way of submitting it
And that's it! Your mesh will be drawn on the next draw cycle. There are many ways to customize it and do all sorts of neat things, but that's the basic way to draw a mesh with the engine. 
EDIT:
I didn't explain the concept of the "Shading Setup" I mentioned. Basically, it's a way to convey information to, or control, a shader. For example, you can use a shading setup object on an HLSL shader to dynamically select different techniques and decide which of their passes to include or exclude. You can also use it to apply data/variables at the instance or batch level. There's a lot you can do with it! Though it's not implemented yet, it will soon be possible to script a shading setup in a custom (and very simple) scripting language. If you don't specify a shading setup for a render operation, it will just use the default technique and all of its passes. So it's not required, but it is quite handy and powerful!
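To give a feel for what that could look like in practice, here's a purely illustrative sketch. The ShadingSetup API isn't shown in the post, so every member name below (SelectTechnique, ExcludePass, SetVariable) is a guess at what such an interface might offer:

Code:
    // Hypothetical usage sketch -- all ShadingSetup member names are assumptions
    var setup = new ShadingSetup();
    setup.SelectTechnique("NormalMapped");   // pick a technique dynamically
    setup.ExcludePass("ShadowPass");         // skip one of its passes this frame
    setup.SetVariable("TintColor", new Vector4(1, 0, 0, 1)); // per-instance data

    // Attach it to a render operation; leave it out and the default
    // technique with all of its passes is used instead
    var op = new RenderOp(triMesh, mat, setup);
    renderer.PushOp(op);

The nice part of this design is that per-draw shader control lives in one small object rather than being scattered across the render loop.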