I've been doing a bit of searching and thinking. I want to design a simple 3D engine completely from scratch (I'm doing this for experience and to teach myself better math). I've worked out ways to store 3D points and do proper rotations (some simple vector math), but I'm having a hard time thinking through how to properly wrap a bitmap onto a 3D surface. My first thought was to cut up the bitmap and render the chunks within triangles. Triangles would be handy because I could use a linear transformation with a matrix to properly fit the bitmap inside each triangle. However, I've run into a strange perspective issue: I cannot render the pixels within the triangle so that they properly adhere to the perspective distortion. So that got me thinking that linear transformations will not work for me, because I have to make the bitmap distort with a non-linear transformation and make two edges of the transformed box become non-parallel.

Can someone lead me onto a more proper mode of thinking? Like not an exact correction but the concept that is being used here at least? Thanks.

Well, I thought on it some more... I can extend the triangle idea by splitting the image into more and more triangles, "chopping up" the image to approximate the perspective per pixel. That would let me control the quality of the whole thing... but isn't there some way to transform the picture to get a non-linear transform??

By nonlinear, do you mean quadratic transformation or something totally different? You could think of things in terms of planes and go from there perhaps. It would be different than your current triangle methodology.

If you can wait for F10, or if you want to play around with the beta out now, you can do perspective distortion automatically with the new drawing API.


Just a thought.

Non-linear as in bringing only two corners of a square together. My idea of chopping it up into more triangles definitely worked; however, it still isn't satisfactory, in that it just doesn't feel efficient enough... and it still isn't adjusting the perspective of the square perfectly.

And on the note of Flash 10, that sounds amazing O_O, BUT I have my doubts about it all the same T_T. As is typical of Flash, they'll make something useful, but the fact that it's in Flash will make it come at a big CPU price :P... thanks for the tip though, I'll start messing with the beta ^_^

"Affine" is probably a better word here than linear. All the transforms supported by Flash's Matrix class (shear/rotate/stretch) are affine. Perspective projection is not affine because it doesn't preserve parallel lines. For now, approximating the perspective transform by tessellating/cutting the drawn shape into small affine chunks is the best you can do. Most Flash 3D engines (PV3D, Away3D, Sandy) do exactly what you're doing: cut the surface into triangles. You can also get decent results if you split a surface scanline by scanline, similar to how the first generations of 3D games worked (wolf3d, doom, descent, quake).
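To see concretely why perspective falls outside the affine toolbox, here's a small sketch (plain JavaScript rather than AS3, but the syntax is nearly identical; the focal length and coordinates are made up for illustration). It projects a tilted quad through a pinhole camera and shows that two edges which are parallel in 3D come out non-parallel on screen:

```javascript
// Pinhole projection: screen = focal * (x, y) / z.
// Two edges of a quad that are parallel in 3D stop being parallel on
// screen once z varies across the quad, which no affine matrix
// (flash.geom.Matrix) can reproduce. FOCAL is an arbitrary choice.
const FOCAL = 500;

function project(p) {
  return { x: FOCAL * p.x / p.z, y: FOCAL * p.y / p.z };
}

// A square quad tilted away from the camera: the top edge is farther away.
const quad = [
  { x: -1, y: -1, z: 2 }, // bottom-left  (near)
  { x:  1, y: -1, z: 2 }, // bottom-right (near)
  { x:  1, y:  1, z: 4 }, // top-right    (far)
  { x: -1, y:  1, z: 4 }, // top-left     (far)
];

const s = quad.map(project);
const left  = { x: s[3].x - s[0].x, y: s[3].y - s[0].y };
const right = { x: s[2].x - s[1].x, y: s[2].y - s[1].y };

// The 2D cross product is zero iff two screen-space edges are parallel.
const crossLR = left.x * right.y - left.y * right.x;
console.log(left, right, crossLR); // crossLR != 0: the sides converge
```

An affine transform always maps parallel lines to parallel lines, so no amount of matrix tweaking gets you that converging-sides look; the divide by z is the non-linearity.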

Like JS said, the new graphics API of Flash 10 will support perspective transforms with a graphics.drawTriangles method, similar to the graphics.beginBitmapFill method of Flash 9. Even without hardware acceleration, I expect this new method to be much speedier than any of the existing tessellation approaches.

Righto, I figured nothing could be done with the current matrix capabilities, it was just a small hope for something someone might have known ^_^.

Also, "affine transformation" is one way of putting it... but basically what I wanted accomplished only dealt with perspective distortion, not that plus a translation, so "linear transformation" worked :P. Of course that's just nitty-gritty terminology, and almost pointless for me to bring up XD

So now that triangles are confirmed as the way to go, what would you say is the most efficient way of cutting said triangles from the bitmap? My way bogs down the processing too much T_T. The point calculation method I have is plenty efficient for now (and can be optimized much more), but the bitmap cutting and rendering is annoying... some quick questions should help me think this through:

Should the triangle cuts be made "on demand" from a stored square bitmap, or should the cuts be made once and stored in memory to be called up for transformation?

See, I'm not sure what future problems I may encounter while making the engine far more in-depth (with all the sorting and visual corrections needed to make everything render properly in all conditions :P). If I have only certain triangles available to me, will that impede making proper visual corrections? Do I need on-demand cutting to fix certain rendering problems?

This will again lead me in the right thinking process ^_^, thanks for all the previous answers and future answers!

Originally Posted by Diniden

So now that triangles are confirmed as the way to go, what would you say is the most efficient way of cutting said triangles from the bitmap? ...

Should the triangle cuts be made "on demand" from a stored square bitmap, or should the cuts be made once and stored in memory to be called up for transformation?

I don't have much expertise here; I'm more of a scanline guy than a triangle guy. But I think the former is a better policy. Perspective correction is all about z, or focal length, or whatever you choose to call the distance from the camera to a point on the surface. Any surface of constant z can be mapped affinely. If you're looking straight at a polygon (i.e. it lies in a plane perpendicular to your look direction), you can spit it out with just one graphics.beginBitmapFill() operation.

For a flat polygon in an arbitrary 3D orientation, the contours of constant z are straight lines in screen space. As an example, for a wall in wolf3d the lines of constant z happen to be vertical scanlines. For a floor in doom, the lines of constant z are horizontal scanlines. For a surface in quake, the line of constant z can be oriented arbitrarily in screen space. To visualize/compute the lines of constant z, take the surface normal and project it onto the screen. Call this vector V. V points along gradient(z), so lines of constant z are perpendicular to this V. The magnitude of V also gives you a clue as to how much z changes for each step along x and y (larger vector = more change in z per pixel = greater subdivision density required).
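Here's a minimal sketch of that normal-projection trick (JavaScript for brevity; the function name and the fixed "camera looks down +z with no rotation" setup are my assumptions, not from any particular engine):

```javascript
// For a flat polygon with 3D unit normal n, viewed by a camera looking
// along +z with no rotation: projecting n onto the screen gives
// V = (n.x, n.y). V points along gradient(z), so lines of constant z
// run perpendicular to V.
function constantZDirection(n) {
  const V   = { x: n.x,  y: n.y };  // screen-space direction of changing z
  const iso = { x: -V.y, y: V.x };  // perpendicular: a line of constant z
  return { V, iso };
}

// Example: a floor-like surface, normal tilted up toward the camera.
const n = { x: 0.0, y: 0.6, z: -0.8 };
const { V, iso } = constantZDirection(n);
console.log(V, iso);
// V = (0, 0.6): z changes as you move vertically on screen.
// iso = (-0.6, 0): constant-z lines are horizontal scanlines,
// exactly the doom-floor case described above.
```

The magnitude of V doubles as the subdivision hint from the post above: a longer V means z changes faster per screen pixel, so that surface needs denser cutting.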

Originally Posted by Diniden

If I have only certain triangles available to me, will that impede making proper visual corrections? Do I need on-demand cutting to fix certain rendering problems?

My opinion is yes on both counts. When tessellating a polygon in screen space into a mesh of triangles, I think the most important thing to avoid is placing a mesh edge along (or close to) the direction of V. The one time I tried the triangle-subdivision approach, I didn't really respect this rule, and I think that leads to bad visual artifacts. It will make your mesh strongly anisotropic: your triangles will be long along the axis of constant z and narrow along gradient(z), but I think that's the right way to do it.

EDIT: If I were you, I would learn the drawing API of Flash 10 instead of implementing an affine tessellator in Flash 9. Flash 10 will be out soon and will (thankfully) make this tessellation approach obsolete.

Well, it seems I will want to rethink my approach...

I gave some thought to the triangle approach but couldn't figure out how to make the drawing very efficient... then it occurred to me that I could use rays. As I recall from a linear algebra discussion I had a while back, I could shoot out rays and find polygon intersections. Whichever polygon a ray intersects, I map the hit point on that polygon to the bitmap I'm mapping onto the polygon's surface. From that bitmap I can easily select which pixel (or group of pixels) to draw, then write that pixel data to the screen, with the closeness of the ray intersection determining the draw order of the bitmap data. So essentially I could work with a single bitmap for the output render... I'm not sure if this will have some visual errors, but I can see it cleaning up a lot of rendering problems. I'm also leery of how bad it'll be for the processing :x... most of it is simple vector math, so possibly it'll work out real nice ^_^. Right now I can hit about 1000 points and map them to two separate vector spaces at about 170 fps... so we'll see how this ray idea I've got cooking works out...
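For the ray/polygon intersection step described above, a standard approach is the Möller-Trumbore ray/triangle test. A sketch in JavaScript (AS3-like syntax; helper names are mine); as a bonus it returns barycentric coordinates (u, v), which are handy for indexing into the triangle's patch of the bitmap:

```javascript
// Möller-Trumbore ray/triangle intersection: returns the distance t plus
// barycentric (u, v) of the hit, or null on a miss.
const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const cross3 = (a, b) => ({
  x: a.y * b.z - a.z * b.y,
  y: a.z * b.x - a.x * b.z,
  z: a.x * b.y - a.y * b.x,
});
const dot = (a, b) => a.x * b.x + a.y * b.y + a.z * b.z;

function rayTriangle(orig, dir, v0, v1, v2) {
  const e1 = sub(v1, v0), e2 = sub(v2, v0);
  const p = cross3(dir, e2);
  const det = dot(e1, p);
  if (Math.abs(det) < 1e-8) return null;  // ray parallel to triangle plane
  const inv = 1 / det;
  const tv = sub(orig, v0);
  const u = dot(tv, p) * inv;
  if (u < 0 || u > 1) return null;        // hit outside the triangle
  const q = cross3(tv, e1);
  const v = dot(dir, q) * inv;
  if (v < 0 || u + v > 1) return null;
  const t = dot(e2, q) * inv;
  return t > 0 ? { t, u, v } : null;      // t <= 0: triangle behind the ray
}

// A triangle straight ahead of the origin, 5 units away:
const hit = rayTriangle(
  { x: 0, y: 0, z: 0 }, { x: 0, y: 0, z: 1 },
  { x: -1, y: -1, z: 5 }, { x: 3, y: -1, z: 5 }, { x: -1, y: 3, z: 5 }
);
console.log(hit); // { t: 5, u: 0.25, v: 0.25 }
```

Sorting hits by t gives the "closeness determines draw order" behavior for free.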

Can you see any issues that may arise with this outlook? I believe it essentially solves a lot of the per-pixel z perspective just by the nature of the 3D vector space I'm using... unless I'm misthinking something. I'll lose detail in the textures I use (to save on processing), but I'll also be able to regulate how detailed the image comes out...

I was thinking about the same thing for a long time. 1000 points means roughly a 32x32 grid, which looks like this. This is a decent level of detail (compared to this, for example), but the main focus should be how to efficiently track edges.

@rachil, this is essentially a scanline-like or voxel-like approach, so if you could put your brain into research mode on this, I think we'd have something.

By "efficiently track edges"... are you referring to finding the triangles' sides? My point calculation method works quite decently. I have 2 different 3D vector spaces: one vector space is just storage, and the second vector space is determined by a basis (which is determined by the camera).

I'm thinking of shooting the rays from the second vector space but keeping them mapped within the first vector space. This way I can precalculate plane normals and all that stuff, and just shoot my rays from the second vector space to read from the first. So with this I can already have all edges and lines precalculated before I even mess with the rays. The rays will then tell me what shapes and edges I'll need for rendering, but most of the heavier-duty calculation will already be done (I'll just have to calculate the actual ray intersection and see if its collision point lies within the given triangle)...

So was it THOSE edges you were referring to? (Remember, I haven't read up on 3D engines yet, so I don't know a lot of the terminology of designing an engine.)

I'm still thinking through how to map my ray hit location to the flat bitmap, though. I may just go ahead with the precut triangle bitmap data and slap the pieces onto the screen with a matrix transform; having the data already there would save a lot of drawing processing. But I fear matrix transforms and a bunch of bitmaps floating around the screen will bog processing down a lot, so more thought is needed. Of course, I could have each ray just grab a color value and add that color to the screen, but to get pristine detail I'd have to do it for every pixel, which Flash would explode over... and if I spaced the rays out, my picture would pixelate way too much XD. I'll give the matrix transform idea a try... it may work better than I expect if I streamline it well...

Also rachil, I did a quick test. I typically use a 700x600 screen for whatever I design, so I ran a loop doing a getPixel and a setPixel over those dimensions... the best frame rate I could get with ONLY those types of commands was about 10 fps...

So I dunno about the approach of scanlining the picture... well, actually I haven't tested it much. You could probably adjust the detail of the scanning and make each scan 2 or 3 lines apiece, which would help a lot... hmmmm... yeah, I'll go down this ray approach thingummy and we'll see what you can come up with ^_^ I bet you'll find something crazy good for rendering... *is still leery about Flash 10 and thinks there may be a better way XD*... actually, if Flash 10's thing does work out well, then my little triangle transform idea may work out very well O_O hmmmm... too many options ;~;

EDIT: Oh wow, and I just thought of how to map my intersection to the proper bitmap data. Instead of cutting up my bitmap into predefined triangles, I'll just map out my triangle points onto the flat bitmap; then each triangle in the first vector space will store a precalculated basis for transforming the bitmap coordinates to the face's location, and the intersection point will be transformed according to that basis (basis calculations are all generally short, sweet, basic +-*/ operations). I'll just have to think on the math for a bit to see if that idea will suffice and be applicable...
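That per-triangle basis idea can also be expressed with barycentric weights: store bitmap (u, v) coordinates per triangle vertex once, and the intersection's barycentric coordinates give the texel as a plain weighted average, no per-frame basis solve. A sketch (JavaScript; the function and variable names are mine, not from any engine):

```javascript
// Given the barycentric (u, v) of a hit on a triangle, and the bitmap
// coordinates uv0/uv1/uv2 stored per vertex, the texel is a weighted
// average of the three vertex coordinates.
function hitToTexel(u, v, uv0, uv1, uv2) {
  const w = 1 - u - v; // weight of vertex 0
  return {
    x: w * uv0.x + u * uv1.x + v * uv2.x,
    y: w * uv0.y + u * uv1.y + v * uv2.y,
  };
}

// Triangle covering the top-left half of a 256x256 bitmap,
// hit at barycentric (0.25, 0.25):
const texel = hitToTexel(
  0.25, 0.25,
  { x: 0, y: 0 }, { x: 255, y: 0 }, { x: 0, y: 255 }
);
console.log(texel); // { x: 63.75, y: 63.75 }
```

Since this samples in the 3D space before projecting, the per-pixel perspective comes out correct by construction, which matches the intuition in the post above.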

The edges I referred to are the visible edges of scene objects. They need to be followed by triangles or else they'll look like s|-|it. I.e., we need some sure way to shoot more rays towards object edges, and fewer towards the sides facing the camera, or else this idea will not work.

Righto! I see the issue here... well, I'll first take a shot at finding the face with the rays. Each face will store its own bitmap data (only a triangle's worth), and I'll linearly transform that data to the face's 3 coordinates. That's just a start for getting my ray engine working; from there I'll keep your edge tracking in mind and try to think of something dubiously, deviously splendid :P

btw... what is the most efficient method Flash has for editing pixels? And here's another question: is beginBitmapFill faster than copyPixels, or getPixel/setPixel?

I have just developed a good way of utilizing my rays for finding all visible faces. I'm still working out the recursive algorithm, but here's the concept: I shoot a long ray from my field of vision until I find a face. Once that face is found, I shoot out rays from just outside (by 1 pixel) each side of the face. These rays are orthogonal to the screen. When they find a face, I repeat the procedure. This finds all the visible faces to snag. I'm still working out when to stop the recursion at the edge of the field of vision. I'm also working on making the recursion efficient by deciding which sides not to shoot a ray from (i.e. shoot only on sides that are not in the same direction (left/right/up/down) as the ray that found the current face). Also, as I discussed and found in another forum, I will use the normals of each face to determine which faces are behind the camera; or, if I'm correct, in my second vector space every face that lies on the positive x-axis is behind the camera, so I will eliminate those from the check.

Luckily, all my normals etc. are calculated in the first vector space, so they can all be found easily by just using my second vector space's basis.
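The normal-based "behind/facing" check mentioned above is usually a single dot product per face (back-face culling), a cheap pre-filter before any rays are shot. A sketch with made-up names (JavaScript):

```javascript
// Back-face test: a face can only be visible if its normal points toward
// the camera, i.e. dot(normal, viewDir) < 0, where viewDir runs from the
// camera to any point on the face.
function facesCamera(faceNormal, facePoint, cameraPos) {
  const view = {
    x: facePoint.x - cameraPos.x,
    y: facePoint.y - cameraPos.y,
    z: facePoint.z - cameraPos.z,
  };
  const d = faceNormal.x * view.x + faceNormal.y * view.y + faceNormal.z * view.z;
  return d < 0;
}

const cam = { x: 0, y: 0, z: 0 };
// A face ahead of the camera whose normal points back toward it:
const front = facesCamera({ x: 0, y: 0, z: -1 }, { x: 0, y: 0, z: 10 }, cam);
// The same face seen from behind (normal points away):
const back = facesCamera({ x: 0, y: 0, z: 1 }, { x: 0, y: 0, z: 10 }, cam);
console.log(front, back); // true false
```

Since the normals live in the first vector space, this test works there directly; only faces that pass it need the basis change and the ray tests.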

So, we'll see how this goes X_X. This is the biggest development I've had yet in my thought process :P See any issues that may occur, other than running down the processor XD?

And I just got another idea... I could have my field-of-vision lines be four other vector spaces and solve for their bases. Then I can easily transform their bases using the camera's vector space basis X_X! Then finding whether a recursed ray is outside the field of vision is simply, marvelously easy: just see if the ray's start point lies on the positive x-axis side of the field of vision's basis X_X!! But of course, that's only if my positive x-axis idea works for my camera...

The reason why I keep saying positive x-axis is that's just how it worked out with my vector math T_T. I can't exactly fully explain it to you, but at angle (0,0,0) the x-axis points at and away from the screen, the y-axis goes left to right, and the z-axis goes up and down. Don't question it, just accept it is all I can say XD