I think 3D geometry has a lot of quirks, and so many results that unintuitively don’t hold up. In the link I share a discussion with ChatGPT where I asked the following:
Assume a plane defined by a point A=(x_0,y_0,z_0) and a normal vector n=(a,b,c) (whose exact values don’t matter here), and suppose a point P=(x,y,z) sitting somewhere in R^3. The question is:
If H is a point on the plane such that (AH) is perpendicular to (PH), does it follow immediately that H is the orthogonal projection of P onto the plane?
I suspected the answer was no before asking, but GPT gave the wrong answer “yes”, then corrected itself afterwards.
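For what it’s worth, here’s the reasoning (with my own illustrative numbers, not taken from the chat): since H and A both lie in the plane, the condition AH ⟂ PH reduces to AH ⟂ QH, where Q is the true projection of P, so H can be any point of the circle in the plane with diameter AQ (Thales), not just Q itself. A quick numeric check:

```python
import numpy as np

# Illustrative numbers (my own, not from the linked chat):
# plane z = 0 through A = (0,0,0), normal n = (0,0,1), and P = (2,0,1).
A = np.array([0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
P = np.array([2.0, 0.0, 1.0])

# True orthogonal projection of P onto the plane:
Q = P - np.dot(P - A, n) * n            # -> (2, 0, 0)

# Another point of the plane, on the Thales circle with diameter A-Q:
H = np.array([1.0, 1.0, 0.0])

print(np.dot(H - A, H - P))             # 0.0   -> AH is perpendicular to PH
print(np.allclose(H, Q))                # False -> yet H is not the projection
```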
So don’t we really need more education about 3D space in high schools? It shouldn’t be that hard to recall such simple properties on the fly, even for the best knowledge-retrieval tool of the moment.
Back in 2001, I wrote my own 3D graphics engine, down to the individual pixel rendering, shading, camera tracking, Z-buffering, hell, even error-diffusion dithering for 256-color palettes.
And I still don’t know half the terms you just used.
I do know points, polygons, vectors, normals, roll, pitch, yaw, Lambert’s Law shading, error diffusion feedback…
And my Calculus 2 teacher admired my work and told me I had the understanding of a Calculus 4 student.
Impressive, I’d like to ask about things like how long it took you. But in this discussion I’d like to point out that I didn’t use any complicated terms, only orthogonal projection (middle school) and perpendicularity (elementary school).
I started from the ground up in December 1998 with a bare wireframe engine, largely inspired by a demo wireframe engine from another developer. I was 17 years old then, so it was basically my after-school project, not a school assignment, but my teachers were impressed.
I didn’t just copy/paste his code though; I carefully read over his code and comments until I understood how it all worked, then rewrote a much cleaner wireframe engine of my own that supported colored lines and even loading from files, which the original demo didn’t.
Later on I came across another demo, from the same developer I think, that demonstrated rendering solid, shaded 3D triangle models. Again, I read over everything and rewrote it from the ground up, largely looking to optimize the rendering for the highest number of polygons per second, and of course to load different models from file.
Then I just started having a bit of fun with the polygon rendering, starting with an optimized integer-based greyscale Gouraud shading algorithm that ran way faster than any similar demos I could find at the time. Note that this was all CPU-driven; no fancy GPU back then, the 3Dfx Voodoo was still a pretty new thing I couldn’t afford…
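Roughly speaking, the integer trick was fixed-point interpolation of intensity across each scanline. A minimal sketch of the idea, written fresh in Python for illustration (not my original QuickBasic, and simplified to a single span):

```python
# Rough sketch of fixed-point (16.16) intensity interpolation across one
# scanline -- the core of a fast CPU Gouraud fill. Fresh illustration code.
FP = 16  # number of fractional bits

def draw_scanline(framebuffer, y, x0, x1, i0, i1):
    """Fill row y from x0 to x1, shading linearly from intensity i0 to i1 (0..255)."""
    if x1 < x0:
        x0, x1, i0, i1 = x1, x0, i1, i0
    span = max(x1 - x0, 1)
    intensity = i0 << FP                     # running intensity in fixed point
    step = ((i1 - i0) << FP) // span         # per-pixel increment in fixed point
    for x in range(x0, x1 + 1):
        framebuffer[y][x] = intensity >> FP  # back to a plain integer grey level
        intensity += step

# Tiny usage example on an 8x16 "framebuffer":
fb = [[0] * 16 for _ in range(8)]
draw_scanline(fb, 3, 2, 13, 40, 220)
print(fb[3])
```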
Then I got the idea of trying to bring color to the project via error diffusion, since I was basically limited to the 320x200, 256-color display mode unless I wanted to run a high-end video mode at a snail’s pace LOL! Error diffusion is slow though, so how did I speed that up?
Well, I did away with the Gouraud shading and went back to treating each polygon as a single solid RGB color, shaded using Lambert’s Law. To speed up the error diffusion, I’d only run 8 pixels through the diffusion algorithm; then, as the polygon rendered, it would just pick randomly from that 8-pixel buffer.
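To make that concrete, the two pieces look roughly like this: Lambert flat shading from the dot product of the face normal and the light direction, plus a small pre-diffused bucket of palette indices that the polygon fill then samples at random. This is a fresh Python illustration with a toy grey palette, not my original QuickBasic:

```python
import random

# Toy 16-entry grey palette standing in for the real 256-color palette.
PALETTE = [(i * 17, i * 17, i * 17) for i in range(16)]

def nearest_palette_index(rgb):
    """Nearest palette entry by squared RGB distance."""
    return min(range(len(PALETTE)),
               key=lambda i: sum((c - p) ** 2 for c, p in zip(rgb, PALETTE[i])))

def lambert_shade(base_rgb, face_normal, light_dir):
    """Flat shading per Lambert's Law: intensity = max(0, N . L), unit vectors assumed."""
    nl = max(0.0, sum(n * l for n, l in zip(face_normal, light_dir)))
    return tuple(c * nl for c in base_rgb)

def dither_bucket(shaded_rgb, size=8):
    """Error-diffuse only `size` pixels up front; the polygon fill then just
    picks randomly from this small bucket instead of diffusing every pixel."""
    bucket, err = [], (0.0, 0.0, 0.0)
    for _ in range(size):
        target = tuple(c + e for c, e in zip(shaded_rgb, err))
        idx = nearest_palette_index(target)
        err = tuple(t - p for t, p in zip(target, PALETTE[idx]))
        bucket.append(idx)
    return bucket

# Per pixel during the polygon fill, instead of full per-pixel error diffusion:
bucket = dither_bucket(lambert_shade((200, 120, 80), (0, 0, 1), (0, 0, 1)))
pixel_value = random.choice(bucket)
```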
Since I was programming in QuickBasic, arrays were limited to 64KB each, meaning memory was very tight, and I actually had to allocate two arrays for the Z buffer, one for the top half of the screen and another for the bottom half.
The inspiration for the camera tracking came from a rather unexpected source: a simple mouse string toy demo, of all things, LOL! I realized that if I used just one segment of that string algorithm, I could make the viewing angle follow a point in the model, or with some creative adjustments, basically any arbitrary point.
I also made a crude CAD scripting side project of sorts, mainly meant to generate a torus, or sections of a torus, with whatever dimensions I wanted. With the right inputs, that also let me easily generate spheres, cylinders, cones, and tubes.
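For anyone curious, the usual way to generate a torus mesh is a two-angle parametric sweep, something like this (a fresh Python sketch, not my original script, and vertices only):

```python
import math

def torus_points(major_r, minor_r, major_steps=24, minor_steps=12):
    """Vertex grid for a torus: sweep a circle of radius minor_r around a ring
    of radius major_r. Setting major_r = 0 degenerates the sweep into a sphere
    of radius minor_r; sweeping only part of the major angle gives tube sections."""
    points = []
    for i in range(major_steps):
        u = 2 * math.pi * i / major_steps         # angle around the main ring
        for j in range(minor_steps):
            v = 2 * math.pi * j / minor_steps     # angle around the tube
            x = (major_r + minor_r * math.cos(v)) * math.cos(u)
            y = (major_r + minor_r * math.cos(v)) * math.sin(u)
            z = minor_r * math.sin(v)
            points.append((x, y, z))
    return points

print(len(torus_points(10.0, 3.0)))   # 24 * 12 = 288 vertices
```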
I think I finished the original wireframe engine within just a few days, but the versions with filled-in polygons probably took me a week to get going, and the more advanced techniques probably took around 2 months each, all in my spare time of course.
I didn’t really have any final product in mind; I was just experimenting and learning, ya know. When 3D GPUs started becoming a big, common thing, I didn’t see much future for my little project, but I sure did learn a lot!
ADHD-driven hard work could never disappoint, huh?
But what was the advantage of QuickBasic? Weren’t C++ and JavaScript around at the time? I only hear about them in this context.
Things were different back then. QBasic was free, yo; I couldn’t afford $200 or whatever for paid development software. Besides, I was just starting to learn anyways.
Later on I did end up finding a pirated copy of the full QuickBasic 4.5 at least, which allowed more RAM usage for my programs.
Edit: In a parallel universe, if I could have afforded it, I might have otherwise started with Borland Delphi.
retro computing was so chad