prochazkaml opened this issue 2 years ago
Hey there, nice to know this is still finding people :) And you're absolutely right in that being able to convert to the model format is pretty important for this whole engine to be useful.
Regarding your question, I can't recall the tri model format ever being elaborately documented, BUT it's not very complex, especially if you take a look at the tri image format first, which is (somewhat) documented, since they share semantics: https://github.com/albe/openTri/blob/main/src/triImage.txt Since tri model files can contain tri images (for textures), you need to understand the latter anyway. Both formats are RIFF style: a 4-byte block identifier, followed by a 32-bit int storing the size of the following block of data - https://en.wikipedia.org/wiki/Resource_Interchange_File_Format Each block then consists of a block-specific header with some meta information about the data contained, like format, encoding, etc. The base layout of the model format is roughly drawn out in the header file here: https://github.com/albe/openTri/blob/main/src/triModel.h#L41-L46 The nice property of this format is that you can write a reader for it "incrementally", as it can simply skip chunks that it doesn't know how to read yet. See https://github.com/albe/openTri/blob/main/src/triModel.c#L401-L415 and the lines before that. For vertex formats the reference is https://github.com/albe/openTri/blob/main/src/triTypes.h#L442
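For illustration, walking such a RIFF-style stream can be sketched in a few lines of Python. The "tXXX" chunk below is invented purely to show the skip behaviour; in a real .trim file the 8-byte "triModel" file header and its counts would be consumed before this loop:

```python
import struct

def read_chunks(data):
    """Iterate over RIFF-style chunks: a 4-byte ID followed by a
    32-bit little-endian size, then that many bytes of payload.
    Unknown chunk IDs are simply yielded and can be skipped by the
    caller, which is what makes the format incrementally readable."""
    offset = 0
    while offset + 8 <= len(data):
        chunk_id = data[offset:offset + 4]
        (size,) = struct.unpack_from('<I', data, offset + 4)
        yield chunk_id, data[offset + 8:offset + 8 + size]
        offset += 8 + size

# Example stream: one known chunk and one a reader may not understand yet.
blob = (b'tMhH' + struct.pack('<I', 4) + b'\x01\x02\x03\x04'
      + b'tXXX' + struct.pack('<I', 2) + b'\xff\xff')
for cid, payload in read_chunks(blob):
    if cid == b'tMhH':
        print('mesh chunk,', len(payload), 'bytes')
    # anything else is silently skipped
```

The size prefix is what lets the reader jump over chunks it doesn't know, so new chunk types can be added without breaking old readers.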
The basic idea of all the custom file formats was to store data as close to the PSP hardware formats as possible (e.g. swizzled RGB565 images or TRIANGLE_STRIP-ordered UVN 8-bit vertices) and optionally apply some gzip compression.
I think Tomaz back then wrote a small tool to read OBJ (https://en.wikipedia.org/wiki/Wavefront_.obj_file) and/or Quake MD2/3 models and save them in our format (that's how we got the models for the cars of the game we wanted to write with this engine - https://github.com/albe/openTri/tree/main/src/game/data and https://github.com/albe/openTri/tree/main/src/tests/models), but I no longer have that code.
If something still turns out unclear, feel free to ask.
Hi, thank you so much for your speedy response! :)
I suspected that the model format would be a container for many different file types (as I saw "triImage" as well when looking at the file in a hex editor, so it had to also contain texture data along with the mesh), but it's definitely nice to know that it's just a modified RIFF file (as I can see some extra headers at the start).
I'll probably make a converter for simple model files without texture data first (single color only), then I'll expand into textured stuff (stealing the swizzling routine from triImage.c seems like the best choice here).
Well, now I'll have to get to work, I guess! I'll post back once I get something actually working.
Alright, so with the information that you've given, I managed to construct a simple .trim file (a white cube) in NASM (yes, I created a binary file in an assembler, don't judge me):
mainheader:
.magic db "triModel"
.numMeshes dw 1
.numModels dw 1
.numTexs dw 0
.reserved dw 0
mesh:
.magic db "tMhH"
.size dd .chunk_end - .chunk_start
.chunk_start:
.name db "Mesh 0 "
.vertFormat dd 0b110011100 ; GU_COLOR_8888|GU_VERTEX_32BITF
.numVerts dw (.chunk_end - .verts) / 16
.flags dw 0 ; No compression, GU_TRIANGLES
.vertSize dw 16 ; Color, X, Y, Z
.texID dw 0
.dataSize dd .chunk_end - .verts
.verts: ; XYZ extracted from an STL generated by the OpenSCAD command "cube(1, center = true);"
dd 0xFFFFFFFF, -0.5, 0.5, 0.5
dd 0xFFFFFFFF, 0.5, -0.5, 0.5
dd 0xFFFFFFFF, 0.5, 0.5, 0.5
dd 0xFFFFFFFF, 0.5, -0.5, 0.5
dd 0xFFFFFFFF, -0.5, 0.5, 0.5
dd 0xFFFFFFFF, -0.5, -0.5, 0.5
dd 0xFFFFFFFF, -0.5, -0.5, -0.5
dd 0xFFFFFFFF, 0.5, 0.5, -0.5
dd 0xFFFFFFFF, 0.5, -0.5, -0.5
dd 0xFFFFFFFF, 0.5, 0.5, -0.5
dd 0xFFFFFFFF, -0.5, -0.5, -0.5
dd 0xFFFFFFFF, -0.5, 0.5, -0.5
dd 0xFFFFFFFF, -0.5, -0.5, -0.5
dd 0xFFFFFFFF, 0.5, -0.5, 0.5
dd 0xFFFFFFFF, -0.5, -0.5, 0.5
dd 0xFFFFFFFF, 0.5, -0.5, 0.5
dd 0xFFFFFFFF, -0.5, -0.5, -0.5
dd 0xFFFFFFFF, 0.5, -0.5, -0.5
dd 0xFFFFFFFF, 0.5, -0.5, 0.5
dd 0xFFFFFFFF, 0.5, 0.5, -0.5
dd 0xFFFFFFFF, 0.5, 0.5, 0.5
dd 0xFFFFFFFF, 0.5, 0.5, -0.5
dd 0xFFFFFFFF, 0.5, -0.5, 0.5
dd 0xFFFFFFFF, 0.5, -0.5, -0.5
dd 0xFFFFFFFF, 0.5, 0.5, -0.5
dd 0xFFFFFFFF, -0.5, 0.5, 0.5
dd 0xFFFFFFFF, 0.5, 0.5, 0.5
dd 0xFFFFFFFF, -0.5, 0.5, 0.5
dd 0xFFFFFFFF, 0.5, 0.5, -0.5
dd 0xFFFFFFFF, -0.5, 0.5, -0.5
dd 0xFFFFFFFF, -0.5, -0.5, -0.5
dd 0xFFFFFFFF, -0.5, 0.5, 0.5
dd 0xFFFFFFFF, -0.5, 0.5, -0.5
dd 0xFFFFFFFF, -0.5, 0.5, 0.5
dd 0xFFFFFFFF, -0.5, -0.5, -0.5
dd 0xFFFFFFFF, -0.5, -0.5, 0.5
.chunk_end:
model:
.magic db "tMH "
.size dd .chunk_end - .chunk_start
.chunk_start:
.name db "Model 0 "
.numParts dw 1
.flags dw 0
.pos db 0x00, 0x00, 0x00, 0x00 ; ??? - copied from cars3.trim
db 0x66, 0x66, 0xC6, 0xBF
db 0x00, 0x00, 0x00, 0x00
db 0x00, 0x00, 0x00, 0x00
.rot db 0x00, 0x00, 0x00, 0x00 ; ???
db 0x00, 0x00, 0x00, 0x00
db 0x00, 0x00, 0x00, 0x00
db 0x00, 0x00, 0x00, 0x00
.p0name db "Part 0 "
.p0meshID dw 0
.p0texID dw 0
.p0pos db 0x00, 0x00, 0x00, 0x00 ; ???
db 0x9A, 0x99, 0x19, 0x40
db 0x00, 0x00, 0x00, 0x00
db 0x00, 0x00, 0x00, 0x00
.p0rot db 0x00, 0x00, 0x00, 0x00 ; ???
db 0x00, 0x00, 0x00, 0x00
db 0x00, 0x00, 0x00, 0x00
db 0x00, 0x00, 0x00, 0x00
.chunk_end:
end_of_file:
.magic db "tEOF"
.size dd 0
The assembled binary file is available here: test.trim.gz
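As a sanity check, the handcrafted main header can be parsed back from Python; the field layout below is taken directly from the NASM source above (8-byte magic, then four 16-bit little-endian words), nothing else is assumed:

```python
import struct

def parse_trim_header(data: bytes):
    """Mirror of the NASM mainheader above: 8-byte "triModel" magic,
    then numMeshes, numModels, numTexs and a reserved word."""
    magic = data[:8]
    num_meshes, num_models, num_texs, _reserved = struct.unpack_from('<4H', data, 8)
    assert magic == b'triModel', 'not a trim file'
    return num_meshes, num_models, num_texs

# Rebuild the header of the white-cube file and parse it back.
header = b'triModel' + struct.pack('<4H', 1, 1, 0, 0)
print(parse_trim_header(header))  # (1, 1, 0)
```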
The file seems valid, and judging by the logs the game outputs, it parses through it completely fine, but it is not being rendered... well, sort of.
Upon startup, it seems as if the hardware is trying to render the model somehow, as a black square appears for a single frame upon startup (video slowed down 5x):
And if I double the size of the cube, the size of the black square also doubles, so that has to be my model... in some form.
Just to clarify: this is the src/tests/models demo, tweaked slightly so that it doesn't rotate the parts and loads only 1 model from the file. It still works perfectly fine with cars3.trim, so the code most likely isn't at fault here.
I'm suspecting the vertex format, as my file does not include normals (and I have no clue how to generate one for a single vertex, which this needs) or the fact that there are no textures included in the file. Either way, I'm at a complete loss here.
I have no idea how I could have missed this, but if you squint long enough at the final frames of the video, you can actually see the cube, and I can rotate it without problems!
Now that I know that the model is in fact being drawn, the question now is: why is it so faint? It seems as if the hardware is completely ignoring the color value embedded in the model, as it stays the same lighter-than-background color no matter what I change it to.
Here's a screenshot of the cube being drawn against a black background, which makes it much easier to see:
I managed to construct a simple .trim file (a white cube) in NASM (yes, I created a binary file in an assembler, don't judge me):
Haha. I won't. That's awesome to be able to do that :D
Upon startup, it seems as if the hardware is trying to render the model somehow, as a black square appears for a single frame upon startup (video slowed down 5x):
why is it so faint? It seems as if the hardware is completely ignoring the color value embedded in the model, as it stays the same lighter-than-background color no matter what I change it to.
Looks a lot as if the lighting is wrong. IIRC lighting is disabled by default though (https://github.com/albe/openTri/blob/main/src/tri3d.c#L78), so I wonder why it looks that way. The only thing I can think of that could cause this is the .texID you set to 0 in your model. Unfortunately, 0 is considered a valid textureId (I wouldn't do this again if I designed the API today), so I think the model rendering tries to bind the non-existent texture 0, which probably points to the framebuffer memory region (0x00000000 is a valid pointer to the start of the vmem), and hence the cube looks the same color as the background. See https://github.com/albe/openTri/blob/main/src/triModel.c#L470 and https://github.com/albe/openTri/blob/main/src/triTexman.c#L287 respectively. So please try setting the texID to -1 in your model and see if that changes something.
I'm suspecting the vertex format, as my file does not include normals (and I have no clue how to generate one for a single vertex, which this needs) or the fact that there are no textures included in the file.
The normals should only play a role when lighting is enabled, which as explained above is disabled by default. In case you want to add normals, the way they're commonly calculated for a single vertex is by averaging the normals of all the planes the vertex position is part of. So in the case of your cube, each vertex would have a normal made up of the average of the three planes it connects. For example, given v0 = <-0.5, 0.5, 0.5> in a right-hand coordinate system:
n[v0] = (n[p_left] + n[p_top] + n[p_front])/3 = (<-1, 0, 0> + <0, 1, 0> + <0, 0, 1>)/3 = <-1/3, 1/3, 1/3>
The algorithmic problem when generalizing this comes from vertices that connect two triangles on one side of the cube, as they bias the normal towards that direction (e.g. if the left cube plane is made of two triangles that connect at v0, then n[v0] = (n[p_left1] + n[p_left2] + n[p_top] + n[p_front])/4 = <-2/4, 1/4, 1/4>). Anyway, for a cube you don't even want that "averaged" = smoothed normal, since each face of the cube should be lit such that there's no color gradient towards the corners. Hence in this case you just take the normal of the (triangle) face this specific vertex instance is part of: n[v0] = n[p_left] = <-1, 0, 0>, but with v0_top = <-0.5, 0.5, 0.5> being part of a top-facing triangle, n[v0_top] = n[p_top] = <0, 1, 0>.
More concretely: given a triangle in the cube made of these vertices (i.e. the first one in your model, front facing in a right-hand system)
dd 0xFFFFFFFF, -0.5, 0.5, 0.5
dd 0xFFFFFFFF, 0.5, -0.5, 0.5
dd 0xFFFFFFFF, 0.5, 0.5, 0.5
each of those vertices should have the normal n = (v1-v0)x(v2-v0), normalized to unit length (n/|n|), which here is <0, 0, 1>
PS: To visualize what I am talking about, look at the "hard edge geometry" vs. the "smooth geometry" cubes in this image: https://i.imgur.com/ikZjE2B.png The latter is what you get with averaged/smooth normal calculation. No real cube will look like that when lit from one side. The former is what you get with the non-smooth normals, which looks a bit more realistic, but very "hard" and "sharp".
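The two normal-calculation styles described above can be sketched in a few lines of plain Python; the example vertices are taken from the cube model earlier in the thread:

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    l = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return tuple(c / l + 0.0 for c in v)   # +0.0 folds -0.0 into 0.0

def face_normal(v0, v1, v2):
    # "hard edge" normal: n = (v1 - v0) x (v2 - v0), normalized
    return normalize(cross(sub(v1, v0), sub(v2, v0)))

def averaged_normal(face_normals):
    # "smooth" normal: normalized sum of the adjacent face normals
    s = [sum(n[i] for n in face_normals) for i in range(3)]
    return normalize(s)

# The front-facing triangle from the cube model above:
print(face_normal((-0.5, 0.5, 0.5), (0.5, -0.5, 0.5), (0.5, 0.5, 0.5)))
# Smoothed normal at corner v0, adjacent to the left/top/front faces:
print(averaged_normal([(-1, 0, 0), (0, 1, 0), (0, 0, 1)]))
```

Note that the smoothed result is normalized to unit length here rather than just divided by the face count; the direction is the same either way, and the GU expects unit normals for lighting.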
Setting texID to -1 made no change, but if 0 is a valid texture ID, let's add a simple 16x16 texture to the model!
mainheader:
.filemagic db "triModel"
.numMeshes dw 1
.numModels dw 1
.numTexs dw 1
.reserved dw 0
image:
.chunkmagic db "tImg"
.chunksize dd .chunk_end - .chunk_start
.chunk_start:
.filename db "blank.tri"
.filenamepad times 64 - (.filenamepad - .filename) db 0
.filemagic db "triImage"
.numFrames dd 1
.reserved dd 0
.format dw 5 ; GU_PSM_T8
.palFormat dw 3 ; GU_PSM_8888
.flags dw 0 ; No compression/swizzling
.numLevels dw 0
.delay dw 0 ; None, use only for animations
.xOffs dw 0
.yOffs dw 0
.reserved2 dw 0
.palette times 256 dd 0xFFFF0000
.width dd 16
.height dd 16
.widthAlign dd 16 ; Must be power of 2
.size dd .chunk_end - .imgdata_start
.imgdata_start:
times 16*16 db 0
.chunk_end:
mesh:
.chunkmagic db "tMhH"
.chunksize dd .chunk_end - .chunk_start
.chunk_start:
.name db "Mesh 0 "
.vertFormat dd 0b110000011 ; GU_TEXTURE_32BITF|GU_VERTEX_32BITF
.numVerts dw (.chunk_end - .verts) / 20
.flags dw 0 ; No compression, GU_TRIANGLES
.vertSize dw 20 ; U, V, X, Y, Z
.texID dw 0
.dataSize dd .chunk_end - .verts
.verts:
dd 0.0, 0.0, -1.0, 1.0, 1.0
dd 0.0, 0.0, 1.0, -1.0, 1.0
dd 0.0, 0.0, 1.0, 1.0, 1.0
...etc... (same as before, except "0xFFFFFFFF" was replaced by "0.0, 0.0" as the texture coordinates)
And would you look at that:
IT WORKS! Well... again, sort of. It does display an image, but PPSSPP throws very concerning messages in the logs while my game is loading the model at startup:
In fact, half of the time when starting my "game", PPSSPP either hangs or just straight up segfaults. So there are some memory-related shenanigans going on which I'll have to solve. Do you know what might be causing this? The code is again not very likely at fault here, as it still loads and renders cars3.trim perfectly without issues.
Thank you for the explanation of normals and how their calculation affects lighting; as a novice in programming for 3D hardware, I found it very informative, and I will definitely need to implement lighting further on. :)
So I poked around openTri and its model & texture loading process a bit, and found out that the crash occurs somewhere inside the swizzle_upload function in triTexman.c. Looking at the triTextureImage function, which is called along the way, it calls swizzle_upload if the texture is not swizzled, or sceGuCopyImage if it is.
Considering that my "texture" only consists of a single color, I set the swizzled flag inside the texture to 1, and now, the model always loads perfectly without crashing!
However, PPSSPP still complains about something:
The only remaining difference between my texture and the texture(s) inside cars3.trim was that mine was not compressed. But after cobbling together a small program which compresses the texture using zlib, it now loads without it complaining at all!
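For reference, the compression step itself is tiny. This sketch only shows the zlib round trip and leaves out the triImage header flag that marks the payload as compressed:

```python
import zlib

# texels: the 16x16 single-colour T8 image from the model above
texels = bytes(16 * 16)

# level 9 = maximum compression; the chunk header would then store
# len(packed) as the data size and flag the payload as compressed
packed = zlib.compress(texels, 9)
print(len(texels), '->', len(packed))

# the loader reverses this before uploading the texture
assert zlib.decompress(packed) == texels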
It even successfully runs on real hardware now :)
Now, I just have to write some generic model & texture converter (.OBJ seems quite simple to parse) and implement lighting. Again, thank you so much for your help. 👍
That looks like a dreaded null-pointer access, since 0x000000e0.. is not a valid memory address. Valid memory regions are 0x08800000..0x09ffffff for user RAM and 0x04000000..0x041fffff for VMEM. See http://hitmen.c02.at/files/yapspd/psp_doc/chap7.html#sec7.3
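Those ranges can be turned into a quick validity check when staring at crash logs (a simplified sketch; kernel RAM and the uncached address mirrors are deliberately ignored here):

```python
def is_valid_psp_address(addr: int) -> bool:
    """Check a pointer against the two memory regions quoted above."""
    return (0x08800000 <= addr <= 0x09FFFFFF or   # user RAM
            0x04000000 <= addr <= 0x041FFFFF)     # VMEM

print(is_valid_psp_address(0x000000E0))   # the faulting address: False
print(is_valid_psp_address(0x04000000))   # start of VMEM: True
```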
found out that the crash occurs somewhere inside the swizzle_upload function in triTexman.c.
Interesting! swizzle_upload only acts on the addresses it receives as parameters, so even though the error happens there, it is rooted before that.
However, PPSSPP still complains about something:
That just shows that the source address of the texture upload is invalid (null pointer). So that means the tex->data (respectively the img->data) pointer is invalid. So something in this region goes wrong:
https://github.com/albe/openTri/blob/main/src/triModel.c#L271-L275
But the inner allocation of the image data is checked for:
https://github.com/albe/openTri/blob/main/src/triImage.c#L1368-L1370
So if that's the culprit, it should show something in the triErrorLog.txt file (or I missed a case further down checking the return value properly).
Edit: After looking further, and especially since it worked for you once compression was added, I found the bug in the code: https://github.com/albe/openTri/blob/main/src/triImage.c#L1372 allocates the image data and reads it from the stream, but unless it falls into either the TRI_IMG_FLAGS_GZIP or TRI_IMG_FLAGS_RLE case, IT DOES NOT ASSIGN the allocated data pointer to img->data (like e.g. here: https://github.com/albe/openTri/blob/main/src/triImage.c#L1392-L1393)!
Guess I never ever even tried it with an uncompressed image back then :D
I quickly committed https://github.com/albe/openTri/commit/7d91791e68f0f7812610455961361a1be3da8e21 but I'm not able to check/verify it at all.
I quickly committed https://github.com/albe/openTri/commit/7d91791e68f0f7812610455961361a1be3da8e21 but I'm not able to check/verify it at all.
I checked it now, this patch did the trick, it now loads uncompressed textures without problems. 👍
(I will of course still use compressed textures, but it's nice to know that the loader is now a bit more robust.)
Today, I've managed to cobble together a simple .trim model converter, which at the moment only supports the input of a single .OBJ file and texture file pair. That's enough to do some prototyping though, so I got a bunch of these monkey heads (Blender preset) spin around with a (crappy) texture applied to them!
Nightmare fuel? Absolutely (and it's going to get worse shortly). But it works, and that's the main thing I'm focused on right now.
Here is the Python code responsible for the file output (which should give you some idea of the generated data):
file.write(bytes([0] * 12)) # name, don't care
file.write(dword(0b110000011)) # vertFormat (GU_TEXTURE_32BITF|GU_VERTEX_32BITF)
file.write(word(len(f) * 3)) # numVerts
file.write(word(0)) # flags (no compression, GU_TRIANGLES)
file.write(word(20)) # vertSize (U, V, X, Y, Z)
file.write(word(0)) # texID
file.write(dword(len(f) * 3 * 20)) # dataSize
for face in f:
for vert in face:
file.write(struct.pack('f', vt[vert[1] - 1][0])) # U
file.write(struct.pack('f', vt[vert[1] - 1][1])) # V
file.write(struct.pack('f', v[vert[0] - 1][0])) # X
file.write(struct.pack('f', v[vert[0] - 1][1])) # Y
file.write(struct.pack('f', v[vert[0] - 1][2])) # Z
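The v, vt and f lists consumed by the writer above presumably come from a small OBJ parser. A minimal sketch, assuming faces are already triangulated and each face corner is a full v/vt/vn index triplet (real OBJ files also allow "v", "v/vt" and "v//vn" corners, which this deliberately ignores):

```python
def parse_obj(text):
    """Collect positions (v), texture coords (vt), normals (vn) and
    faces (f). OBJ indices are 1-based, which is why the writer above
    subtracts 1 when looking them up."""
    v, vt, vn, f = [], [], [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == 'v':
            v.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == 'vt':
            vt.append(tuple(float(x) for x in parts[1:3]))
        elif parts[0] == 'vn':
            vn.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == 'f':
            # each corner is "v/vt/vn"; keep the index triplet per corner
            f.append([tuple(int(i) for i in p.split('/')) for p in parts[1:]])
    return v, vt, vn, f

obj_text = ("v 0 0 0\nv 1 0 0\nv 0 1 0\n"
            "vt 0 0\nvt 1 0\nvt 0 1\n"
            "vn 0 0 1\n"
            "f 1/1/1 2/2/1 3/3/1\n")
v, vt, vn, f = parse_obj(obj_text)
print(len(f), 'face(s),', f[0][0])  # 1 face(s), (1, 1, 1)
```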
Now, you'll notice that the scene lacks any kind of lighting, so I wanted to add normal vectors to the output. Only the following pieces of the model file have changed (apart from the RIFF header, as the data is now larger): the vertFormat went from 0b110000011 (GU_TEXTURE_32BITF|GU_VERTEX_32BITF) to 0b111100011 (GU_TEXTURE_32BITF|GU_NORMAL_32BITF|GU_VERTEX_32BITF), and the end of the code was adjusted to add the normal vectors into the vertex data, as .OBJ already stores these:
for face in f:
for vert in face:
file.write(struct.pack('f', vt[vert[1] - 1][0])) # U
file.write(struct.pack('f', vt[vert[1] - 1][1])) # V
file.write(struct.pack('f', vn[vert[2] - 1][0])) # NX
file.write(struct.pack('f', vn[vert[2] - 1][1])) # NY
file.write(struct.pack('f', vn[vert[2] - 1][2])) # NZ
file.write(struct.pack('f', v[vert[0] - 1][0])) # X
file.write(struct.pack('f', v[vert[0] - 1][1])) # Y
file.write(struct.pack('f', v[vert[0] - 1][2])) # Z
Nothing special. The vertex size is correct (each vertex is 32 bytes), and the vertex data format matches what the PSP hardware expects from the vertFormat field (at least that's what triTypes.h claims). And yet:
The code was left unchanged, so the lighting is still disabled. It almost looks like every other polygon of the model is either missing or misplaced (as you can see with the one trying to go to infinity and beyond). And I doubt that the normal vectors are at fault here, as I've also tried setting all of them to 0, with no change:
for face in f:
for vert in face:
file.write(struct.pack('f', vt[vert[1] - 1][0])) # U
file.write(struct.pack('f', vt[vert[1] - 1][1])) # V
file.write(struct.pack('f', float(0))) # NX
file.write(struct.pack('f', float(0))) # NY
file.write(struct.pack('f', float(0))) # NZ
file.write(struct.pack('f', v[vert[0] - 1][0])) # X
file.write(struct.pack('f', v[vert[0] - 1][1])) # Y
file.write(struct.pack('f', v[vert[0] - 1][2])) # Z
It almost seems like the combination GU_TEXTURE_32BITF|GU_NORMAL_32BITF|GU_VERTEX_32BITF is invalid for some reason, or that the following definition provided in triTypes.h:
#define TRI_VERTUVN_FORMAT (GU_NORMAL_32BITF|GU_TEXTURE_32BITF|GU_VERTEX_32BITF)
/**
* Vertex with texture coordinates and normale
*/
typedef struct triVertUVN
{
triFloat u, v;
triFloat nx, ny, nz;
triFloat x, y, z;
} triVertUVN, triVertUVNf; // 32 bytes
...is somehow incorrect. Either way, this is all very strange. I could also try the UVCN format (with color information, this format is what cars3.trim also appears to use) and see what happens with that.
tbh. I have no real clue what's going on there. It looks a lot as if the PSP reads the vertices wrongly, as if the vertex data didn't align to what it expects by a few bytes and only comes back into alignment every other vertex. FWIW, even the sample geometries in the sdk only use UVCN (TCNP - texture color normal position) vertices, for unknown reasons: https://github.com/albe/pspsdk/blob/master/src/samples/gu/common/geometry.h Anyway, if you need some help with setting up more complex lights, there are also samples with lighting enabled in the pspsdk. I think I never made that into a proper API for triEngine, hence why there are only the sceGu calls in the modeltest code
Alright, so I added a color value into each vertex, making each one 36 bytes long (the same as in cars3.trim):
I mean, it's still wrong, but at least it isn't stretched into infinity anymore (and no, the normal vectors' values do not matter; changing them to anything causes no change in the rendered output). For fun, I tried changing the color to 0x80FFFF00 (transparent-ish green & blue mask), and the color info is definitely being parsed correctly:
The only thing that separates my vertex data from the one in cars3.trim is, as before, that mine is not compressed. I'm not so hopeful about that this time though, as it is clearly visible that at least some of the triangles are being drawn correctly (with the correct texture mapping & color). But I'll try compressing the vertex data tomorrow anyway to see if anything happens.
Thank you for the lighting examples, didn't know about these! They'll definitely come in handy later (when I get the geometry with normals to draw correctly).
One thing you can try: do a sceGuDisable(GU_CULL_FACE); before the rendering. If the triangles all show correctly then, it's just due to the vertex order of some of the triangles.
Basically, culling is an optimization to prevent drawing the "backside" of a geometry, by defining if "backside-facing" triangles are clockwise or counter-clockwise ordered vertices. By default triEngine expects clockwise to be front-facing and counter-clockwise to be back-facing. See https://github.com/albe/openTri/blob/0349acc7158e00acf474d19e902904b26df904eb/src/tri3d.c#L79
What speaks against that theory is that your monkey face already rendered correctly before, without the normals.
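If the winding does turn out to be the issue, a converter-side fix could be sketched as follows. Note this only makes the winding consistent relative to each face's outward normal; it doesn't know which screen-space order triEngine treats as front-facing, so the whole mesh might still need one global flip:

```python
def fix_winding(tri, outward):
    """tri: three (x, y, z) tuples; outward: the face's intended outward
    normal (e.g. the vn value from the OBJ). If the geometric normal of
    the given vertex order points away from 'outward', swap two vertices
    to flip the winding."""
    a, b, c = tri
    e1 = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    e2 = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    n = (e1[1] * e2[2] - e1[2] * e2[1],
         e1[2] * e2[0] - e1[0] * e2[2],
         e1[0] * e2[1] - e1[1] * e2[0])
    if n[0] * outward[0] + n[1] * outward[1] + n[2] * outward[2] < 0:
        return (a, c, b)   # flip
    return tri

# A front face (+z outward) given in the "wrong" order gets flipped:
tri = ((-0.5, 0.5, 0.5), (0.5, 0.5, 0.5), (0.5, -0.5, 0.5))
print(fix_winding(tri, (0, 0, 1)))
```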
Sadly, all this does is that it shows the already present triangles from both sides (as expected). The missing triangles are just missing.
I'll try compression next, as that's the only thing separating me from the cars3.trim's mesh. Perhaps triModel extracts the mesh data into some mysterious, perfectly aligned buffer that the PSP is happy with? 🤔
This is the second time that compression weirdly managed to fix my issues in this thread. It now draws without problems (I enabled half transparency to show that culling is also enabled and working):
And yes, my real, non-emulated PSP seems to be happy with it as well.
So, now that I have normals working, I should hopefully be able to get lighting to work now!
This is the second time that compression weirdly managed to fix my issues in this thread. It now draws without problems
I'm flabbergasted by this tbh :D
Perhaps triModel extracts the mesh data into some mysterious, perfectly aligned buffer that the PSP is happy with?
That could indeed be. The GU is very picky about accessing non-aligned memory. So maybe the reading directly from the file uses a non-aligned memory buffer, while the decompression path fixes that through the newly allocated buffer. Worth taking a look.
Anyway, glad you're making progress! Keep up
Edit: https://github.com/albe/openTri/blob/main/src/triModel.c#L214-L237 both code paths read into the exact same triMalloc allocated buffer for mesh->verts. So I'm at a loss for explaining the observed behavior difference.
Edit: https://github.com/albe/openTri/blob/main/src/triModel.c#L214-L237 both code paths read into the exact same triMalloc allocated buffer for mesh->verts. So I'm at a loss for explaining the observed behavior difference.
That's... spooky. Either way, there must be some difference for this issue to occur.
On a lighter note, did somebody say functional lighting? 😏
After a bit of trial and error (at first I thought that the lighting was completely broken, but it was just too dim to see), I now got my 3D scene fully implemented!
Thank you so much for this awesome project! I still have yet to explore all of its features (now mainly interested in testing triAudioLib and triParticle), but so far, it seems very exciting!
BTW, would you be interested in including my model converter directly in this repo (via a PR or something), or should I make it as a separate repo? It's written in Python (as it's great for cobbling things together in a single afternoon, and it's fast enough for this job), if that doesn't bother you. It takes an .OBJ for geometry and a .PNG texture (or anything the Pillow library accepts), which must be square with a power-of-2 side length, and it spits out a .trim file whose texture and mesh are both zlib compressed.
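The square power-of-2 requirement mentioned above boils down to a one-line check (a sketch; the width and height would come from whatever Pillow reports for the loaded image):

```python
def is_valid_texture_size(width: int, height: int) -> bool:
    """Square with a power-of-two side, as the converter requires.
    n & (n - 1) == 0 is the classic power-of-two test for n > 0."""
    return width == height and width > 0 and (width & (width - 1)) == 0

for size in (16, 100, 512):
    print(size, is_valid_texture_size(size, size))  # 16/512 pass, 100 fails
```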
After a bit of trial and error (at first I thought that the lighting was completely broken, but it was just too dim to see), I now got my 3D scene fully implemented!
👏 congrats on the achievement! :)
I still have yet to explore all of its features (now mainly interested in testing triAudioLib and triParticle), but so far, it seems very exciting!
I suggest you start with triWav, as it uses triAudio internally on one channel and does software mixing of up to 16 wav files. Once you get around with triWav and approach its limits, you can start digging deeper into triAudio and how to use multiple hardware channels. triAt3 should be straightforward for playing background music in the form of at3 files. Not much to it: you have one file that you can start playing, pause, stop, replay. In theory you could play multiple files at once, but we never considered that use case.
triParticle you will probably have a LOT of fun playing around with, as there are just so many options :) Plus it's all driven by configuration, so you can change the effects of your particle systems by simply editing a textfile without having to recompile. All you really need is triParticleManagerLoadScript() and a loop calling triParticleManagerUpdateRender() with a camera and the delta time since the last frame.
BTW, would you be interested in including my model converter directly in this repo (via a PR or something), or should I make it as a separate repo?
Totally up to you if you want to make it your own repo where you have full control, or add it here - I'll gladly accept a PR. You could as well just create a repo for your tool and I'll refer to it in the readme here. Given this engine isn't developed further anymore, there's not much difference either way, as it's unlikely your tool will diverge from the features of the engine.
Thank you for the tips! I already got triWav working, and it does the job for now. However, triAt3.h states that calling triAt3Init() requires kernel mode. What exactly does that mean?
Looking forward to playing around with the particle system! I will probably set the particle parameters in code (and perhaps alter them in realtime) though, omitting the external scripts. Looking at the example, it looks really simple to implement, it would be a shame to not have some fun with it :D
Regarding my model & texture converter (they are separate now), I decided in the end to create a separate repository for it, as there are still things that I want to implement (see the README for more info). You can get it here: https://github.com/prochazkaml/openTriConverter
Finally, I'd like to give a huge thank you for leading me on my journey towards making functional 3D software for the PSP. I'm still not there yet, but I'm progressively getting closer and closer, thanks to you and your awesome project. :)
in triAt3.h, it states that calling triAt3Init() requires kernel mode. What exactly does that mean?
In the PSP there are two different modes for executing code - user and kernel mode. This comes from the CPU modes (https://en.wikipedia.org/wiki/CPU_modes) that implement a kind of security wall against allowing arbitrary code to do arbitrary things on the hardware. Some of the PSP-provided modules (those that allow access to lower-level hardware components like the media engine) can only be accessed in kernel mode.
So how do you specify kernel/user mode? https://github.com/albe/openTri/blob/main/src/tests/audio/atrac3/at3test.c#L10 - the second argument in this line, which you put at the top of every application, states which mode you want your application or library to run in. However, you can't simply ask your application to run in kernel mode; on an unmodified PSP, trying to run such an application will just be prevented. On custom firmwares this restriction is lifted, but that of course means your application is less compatible. In theory, the right way to handle this case is to write a module yourself which you ask to be loaded in kernel mode, then communicate with that module from your user-mode main application/game.
However, I'm not entirely sure any more whether this hint in the AT3 library is correct (see the test code, which also runs in user mode and still interacts with triAt3 normally). I think there was a bit of confusion going on about which of the PSP-provided modules require kernel mode. So just try it as-is, without thinking about kernel mode, until you get an error on one of the system calls (sce*). See also https://pspsdk.fandom.com/wiki/PSP_MODULE_INFO
btw. if you're looking for more resources, I can highly recommend looking through the older (2005-2010) threads on https://forums.ps2dev.org/viewforum.php?f=14 - you'll likely also find quite a bunch of my posts.
Looking forward to playing around with the particle system! I will probably set the particle parameters in code (and perhaps alter them in realtime) though, omitting the external scripts. Looking at the example, it looks really simple to implement, it would be a shame to not have some fun with it :D
I think the scripts are super useful for playing around and getting a quick feeling about the system and then when setting up your "base" particle systems. You could also implement controls that change parameters of the particle system and reload it on button presses. You're just limited with the amount of buttons available :)
Regarding my model & texture converter (they are separate now), I decided in the end to create a separate repository for it, as there are still things that I want to implement (see the README for more info). You can get it here: https://github.com/prochazkaml/openTriConverter
Thanks! I added a line and link to it in the root README :)
Finally, I'd like to give a huge thank you for leading me on my journey towards making functional 3D software for the PSP. I'm still not there yet, but I'm progressively getting closer and closer, thanks to you and your awesome project. :)
I want to thank you! It's just great knowing something you developed 15 years ago still is useful to someone :) It was always my mantra to develop code such that others can benefit and learn from it.
I tried compiling the triAt3 example and running it in PPSSPPSDL, but it just throws a bunch of errors (no matter which mode is selected, it displays these errors and outputs no audio either way):
I'm guessing that I won't have any better luck on real hardware. But that likely won't matter, as I'd like to use more standard audio formats anyway (mainly MP3, as it's good enough).
btw. if you're looking for more resources, I can highly recommend looking through the older (2005-2010) threads on https://forums.ps2dev.org/viewforum.php?f=14 - you'll likely also find quite a bunch of my posts.
Thank you, I suspect that this will come in very handy!
And also, thank you for including my converter in the README! :)
Hi! I've stumbled upon your project, and after compiling the "models" test, I must say it's quite impressive work!
I've read in some old forum post that this uses a custom model format tailored specifically to the PSP hardware. Other than reading through the entire source code and trying to make sense of it, is there at least a public specification of this file format? I'd have no problem writing a custom converter for it if one was available (STL is quite simple to parse; it's just ASCII text containing vertex coordinates - texture mapping will be hell, but I'm not trying to focus on that right now, I just want to do something with the library).
I mean, if you already have some model converter, that would be ideal, but a bare file format spec would be enough. Without one, this project isn't really of much use beyond 2D stuff.