Zatarita

SeT team - progress


I have done the thing.

It is messy and clunky, but everything works. I can load and save all the H1A file types using this method; I still have to go through and fill out all the placeholder functions I made. Nothing too interesting, but it's progress.
A hex compare shows files created from scratch are identical to the ones supplied.

My brain hurts though; I think I'ma take a break for the night.


Specifications:

S3dpak - format - Imeta/ipak - format - Fmeta - format

Programs:

H2a-inflate - SuP


Tiddy-bits:

Just a mild update.

There has been a temporary delay for a few reasons.

One: due to personal issues I'm migrating my setup to a new location. Not really "moving" per se, but I'm relocating my primary living location.

Two: I'm waiting on two new things to push this library to the level I'm hoping for.

One of them is a book on compression algorithms. I haven't found a good LZX library that cooperates with my progressive decompression algorithm, so I've decided to write my own implementation. Xbox uses LZX compression instead of zlib (if I'm not mistaken; the book will definitely clear up which compression is used if not).

I have also decided to mod my 360 so I can try to mod the Xbox version (as emulation doesn't seem feasible for 360 CEA yet). These are both taking some time.

I have not abandoned the project by any means!

Don't let the silence be spoopy.

I have accumulated a bookshelf for this project and I've learned a lot.

In fact it's inspired me to go back to school! So maybe in the future things will be even better.




Got a wicked tooth infection. It almost landed me in the ER. Terrifying; this is the first time someone's told me "it's not a matter of if it will kill you, it's a matter of when."

Mildly spooky to say the least ಠ_ಠ

I have surgery today ;-; I have a feeling this will be a fun one.

 

That aside, I've been trying to do what I can when I can.

My book on compression algorithms came in. I'm writing my own LZX implementation for Xbox. Turns out I got the wrong thing for modding my Xbox ʕ•ᴥ•ʔ

Once I recover from surgery expenses I think I may just purchase an already modded Xbox 360 instead.

 

Started rewriting the compression algorithm to accommodate the new formats. The current generalized algorithm is built to support zlib. I think I'm going to pass the compression implementation as a template parameter to keep the compression class as generalized as possible. I'm going to rewrite the decompression algorithm as well to utilize virtual functions a bit better. Coming back to it with fresh eyes has highlighted a few shortcomings I want to tackle.
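The template-parameter idea can be sketched roughly like this (all names are hypothetical, and the toy codec is just a stand-in for a real zlib or LZX wrapper):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy codec standing in for a real zlib/LZX wrapper. Any type that
// provides a static decompress() with this signature can be plugged in.
struct XorCodec {
    static std::vector<std::uint8_t> decompress(const std::vector<std::uint8_t>& in) {
        std::vector<std::uint8_t> out(in);
        for (auto& b : out) b ^= 0xFF;  // placeholder transform, not real decompression
        return out;
    }
};

// The generalized decompressor takes the codec as a template parameter,
// so swapping zlib for LZX later is a one-line type change at the call site.
template <typename Codec>
class Decompressor {
public:
    std::vector<std::uint8_t> run(const std::vector<std::uint8_t>& chunk) const {
        return Codec::decompress(chunk);
    }
};
```

The nice part of this design is that the codec choice is resolved at compile time, so there's no virtual-call overhead inside the per-chunk loop.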

 

Beyond this, the H1A library is done; it just needs polishing. I can extract and rebuild imeta, ipak, and s3dpak (fmeta has been deprecated and removed from the standard). The H2A library will be much easier, coming in with the one pck file.

 

Beyond that, I have a feeling I might have to update the UI portion of the program. My programming style has changed so drastically at this point that some portions may be programmed "wrong."

 

image0.png

 

Started working on a tool wrapper that enables queuing commands. This is a proof of concept for the "project file," which will let you build a map from a build file for both Saber and Halo.
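As a rough illustration of the queued-command idea (names hypothetical, not the actual tool's API), a "project file" boils down to replaying an ordered list of tool invocations:

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <string>
#include <vector>

// Hypothetical sketch: each build step is queued as a callable, and
// building a map means running the whole queue front to back.
class CommandQueue {
public:
    void enqueue(std::function<std::string()> cmd) { queue_.push(std::move(cmd)); }

    // Run every queued command in order, collecting each result.
    std::vector<std::string> runAll() {
        std::vector<std::string> results;
        while (!queue_.empty()) {
            results.push_back(queue_.front()());
            queue_.pop();
        }
        return results;
    }

private:
    std::queue<std::function<std::string()>> queue_;
};
```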

 

Also did some more research into template (the 3D model format used by Saber). I have a solid enough understanding that I feel I can extract data programmatically, but reinjection (or exporting) isn't possible yet. One major reason is that I need to read up on the math: the models use what's called a homogeneous vector to calculate scale and rotation from a 4x4 matrix.

Forgive a fool, but I've forgotten matrix math, and thus can't adjust for (or calculate) the size variations stored locally in the matrix in the file.
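For reference, this part is standard math rather than anything tpl-specific: applying a 4x4 transform to a homogeneous vertex (x, y, z, 1) looks like the sketch below. The upper-left 3x3 block carries rotation/scale/skew, and the last column carries translation, which is why one matrix can do both.

```cpp
#include <array>
#include <cassert>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>;  // row-major

// Multiply a 4x4 transform by a homogeneous vector.
Vec4 transform(const Mat4& m, const Vec4& v) {
    Vec4 out{};
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[r] += m[r][c] * v[c];
    return out;
}
```

For example, a matrix with 2s on the diagonal and a 1 in the top-right corner scales a point by 2 and then shifts it by +1 on x.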




Yo

So I've been having a very eventful month. I had to get two teeth extracted and have been on pain pills. Then my car battery died. Work's letting me get some OT in, and I'm taking as much of it as I can to catch up on finances. I'm still working away, though. The pain meds make it difficult to focus on in-depth things, so I shifted my focus a bit to some tpl reverse engineering. It's frustrating trying to remember what I was thinking. Like trying to solve one of those slide puzzles with a missing piece. Hate them things.

I've made massive strides on a Blender importer though:

unknown.png

There are just a few issues.
1) I cheated a bit in that screenshot. The object is translated using a 4x4 transform matrix. Me, being the astute student I was, categorized matrix multiplication as a "thing I will likely not need in the foreseeable future." Past Andy was wrong.

Spoiler

unknown.png
This is its appearance without me fudging the scale.

2) There are a few unknowns still that prevent "creation" of a template. In due time things will get easier.

 

I believe the reason the matrix is being used is that the vertex values are stored as shorts. If we look at an example of a pelican, we see each item is designed to take up as much of the short range as possible:
unknown.png 

The matrix is supposed to resize, and relocate these parts into the right area.
unknown.png
This is the pelican with each "part" spread out from the others. You can see the scale, location, rotation, and skews are partially off.
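A hedged sketch of what that storage scheme implies (the names here are my guesses, not the format's): each stored coordinate is a 16-bit integer stretched across the full range for maximum precision, and the per-part matrix maps it back into world space. For a single axis that reduces to a scale and an offset.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical dequantization: a 16-bit stored coordinate is mapped back
// into world space. For one axis this is just value * scale + offset; the
// 4x4 matrix packages this (plus rotation/skew) for all three axes at once.
float dequantize(std::int16_t quantized, float scale, float offset) {
    return static_cast<float>(quantized) * scale + offset;
}
```

With a scale of 2/32768 the full signed-short range maps onto roughly [-2, 2) world units, which is the kind of "resize and relocate" job the matrix appears to be doing.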

The tpl format is also similar to the gbxmodel in the sense that it's not a single model but a collection of models: different LODs, permutations, muzzle flare, even "sfx geometry."
There also seems to be a reflection of the node parent/child structure. I often see "frame _______" entries, and a hierarchy linking parents to children.

I'm curious if the anniversary engine uses this to "copy" animations. I haven't seen much info regarding animations. If the engine links the nodes to the same nodes between engines, it would explain how they could "translate" the info. This would also keep things synced between versions. This is speculative, and there are some noticeable outliers, like Keyes walking up the stairs in a10. In classic he just ascends the stairs; in anniversary you can see him actually throwing his weight up the stairs. He appears to be an actual physical entity with weight inside a 3D space. In cinematic situations I do feel there must be something containing "extra" data.


Also for some extra fun here is a bonus picture
unknown.png
Might be kinda hard to make out; however, these are BSP vertices.
I've made some progress reverse engineering the .lg file as well, managing to extract some useful data. In fact, the format is extremely similar. There is one major difference: the faces are broken into streams of indices. I can't seem to find anywhere in the file that specifies the size of these streams, or any form of delimiter telling me when one stream ends and another begins. Essentially, if it were a sentence, I can't find the period at the end, and it's written in another language.




Alright, I just want to update the thread because it has been a while.
I've been (attempting) to get a working LZX implementation to use for Xbox maps. I gotta say, though, the book I'm reading is a bit dense. It's rather hard to absorb what it says in a meaningful way. Because of this I feel I need to read a book on information theory before I continue trying to understand compression.
I've read the same chapter like ten times, and I just don't think the way this author explains things works well for me. Parts of the book make sense, though, so I feel it's my comprehension and not the author's poor diction. I'm sure there will be one piece that falls into place, and I'll finally have a cascade of understanding.

Besides this, I've been mildly burnt out in general. Last month I had facial surgery right before my vacation (which I'm currently on; greetings from San Fran). So I've been dedicating this time to honing my skills and gaining knowledge. A less active role; however, I'll label it "Research and Development" and stop beating myself up over it.

I finished my books on software architecture and C++20. I may re-approach some things I've identified as likely to come back and bite me later. The EndianStream library may require an update. After thinking about the pros and cons of the current system, I think I now understand why others take the approach they do with endian-sensitive data. From what I've seen, they make the variable aware of the endianness, not the stream. After reading the book on software architecture, this actually satisfies more quality restrictions than the prior approach.

For example, having the stream aware of the endianness means the stream is doing two things: reading the data and (potentially) swapping the endianness. Comparatively, if we make the variable aware of its own endianness, and the variable is responsible for how it is interpreted, endianness becomes an extension of the interpretation. This reduces coupling. Not only that, but if I ran into mixed-endian files I would have to do some sloppy stream-enforced override, which requires a lot more overhead than just declaring the variable as the correct type. This makes the stream simpler and the variables less dependent on prior knowledge of the anticipated data. It also means I could invert the variables, if you will, meaning I don't need two different classes for files produced by different architectures (say, if an Xbox file were big-endian but the PC one little-endian, or whatever).

This means I'll need to make some slight adjustments to libSaber; however, I don't foresee this impacting the implementation. The fix is really only a refactor.

 

Also, libMccCompress has turned into libSaberCompress. The decompression algorithm is becoming a bit more modular with this cleanup, hopefully allowing any Saber-compressed file to be handled with a high-level implementation over the lower-level systems.


Beyond that I think I'm pretty happy with this implementation.

It's been a while since I've looked at the UI elements, and I've learned a lot in the meanwhile. There may be changes to that once I try to mesh the two systems together. Though things have been hectic on my end.

I'm seeing Jacob Collier, Madeon, and Porter Robinson though :DDD
So give it like a week and I should be able to start dedicating more time to the project.




ALRIGHT!
So I've been working silently.
I've been touching up a lot of the underlying systems to get them to the point of functionality I'm happy with.
 

I have completely removed the need for a custom endian class. Instead I've created endian-aware variables that will cast for me when needed, regardless of the parent system. This utilizes C++20's std::endian definitions, which may limit usage; I'm unsure how this will interact with Linux's endian library, so it will require testing. However, by doing this I can create a custom std::istream >> operator overload, DRASTICALLY reducing the need to write my own parser.

 

I have just finished patching up the decompression object. It is now 100% a template class, which allows for easy extension. I figured I'd leave out h2am compression though, as that should be associated with blam-style compression. This also cleaned up the h2am fringe-case code that bloated everything up.

I'm not a big fan of Boost; however, I am now utilizing the Boost logging library for tracing and debugging. I plan to keep Boost libraries to a minimum. I also went ahead and borrowed a thread-pool library from GitHub (https://github.com/bshoshany/thread-pool). This has cut decompression time down even further, showing roughly a 6.5x speedup on my 8-core machine. This compounds, as I don't need to decompress the entire file, making "random" data access damn near as efficient as possible. Already-decompressed chunks won't be decompressed again.

I also changed the way the decompression object is set up. I allocate an array for the entire decompressed file, and each thread writes to that preallocated memory, reducing the number of runtime allocations needed. The data is read straight from the file, through the decompressor, into the preallocated memory.
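The preallocation scheme can be sketched like this (the identity "codec" is a placeholder, and the chunk table would really come from the file header): one allocation up front, with each worker writing into its own disjoint slice, so no locks or per-chunk buffers are needed. Plain std::thread stands in for the thread pool here.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

// One entry per compressed chunk, as a real header would describe it.
struct Chunk {
    std::size_t srcOffset;  // where the compressed chunk starts in the file
    std::size_t dstOffset;  // where its decompressed bytes belong
    std::size_t dstSize;    // decompressed size (known ahead of time)
};

// Stand-in for the real codec call: copies bytes into the shared buffer.
// Each chunk owns a disjoint [dstOffset, dstOffset + dstSize) slice,
// so concurrent writers never touch the same memory.
void decompressChunk(const std::vector<std::uint8_t>& file,
                     const Chunk& c, std::uint8_t* dst) {
    for (std::size_t i = 0; i < c.dstSize; ++i)
        dst[c.dstOffset + i] = file[c.srcOffset + i];
}

std::vector<std::uint8_t> decompressAll(const std::vector<std::uint8_t>& file,
                                        const std::vector<Chunk>& chunks,
                                        std::size_t totalSize) {
    std::vector<std::uint8_t> out(totalSize);  // single up-front allocation
    std::vector<std::thread> workers;
    for (const auto& c : chunks)
        workers.emplace_back(decompressChunk, std::cref(file), std::cref(c),
                             out.data());
    for (auto& t : workers) t.join();
    return out;
}
```

Random access falls out of the same structure: to read one region, only the chunks whose slices overlap it need to be run.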

That leaves cleaning up the actual Saber definitions for the s3dpak, imeta, and the rest. From there I WILL be done with libSaber, whether I like it or not. I can't fall into the trap of trying to make it "perfect," especially if I keep pushing the bar for what I want. Threading is there, random access is there; I don't need to over-engineer anymore.


