TheNexusCity / editaverse


Imported scene throws several errors as currently there is no support for .metaversefiles, so anything other than the .glb model URL throws errors #28

Open kinjalravel opened 2 years ago

kinjalravel commented 2 years ago

(Four screenshots attached showing the errors.)

DavinciDreams commented 2 years ago

So the error is the one shown in this screenshot: (image)

I think adding legacy support for the older format will resolve the issue. Below I have added some legacy glTF converters.

The metaversefile is loading the asset from GitHub Pages. To load an asset, upload the .glb file to GitHub, set the start URL to the .glb file, publish to GitHub Pages, and presto, the metaversefile should be supported. A minimal sketch of such a manifest follows; after it are the docs on assets:
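As a hedged sketch only, a .metaversefile manifest along these lines might look like the following; the start_url field name and the URL are illustrative assumptions based on the description above, not confirmed against this repo:

{
  "start_url": "https://yourname.github.io/assets/model.glb"
}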

Screenshot System

The Screenshot component is used by the screenshot.html file; it takes URL params as inputs and outputs the screenshot to the URL specified.

Usage

https://app.webaverse.com/screenshot.html?url=https://webaverse.github.io/assets/male.vrm&ext=vrm&type=png

Inputs (as Query Params)

url: {URL of the asset that can be downloaded by the screenshot system} [Required]
ext: {.vrm | .glb | .vox | .png | .jpg | .jpeg | .gif} [Required]
type: {.png | .jpeg} [Required]
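As an illustrative sketch of building such a request URL in JavaScript (the helper name is hypothetical; the parameter names come from the docs above):

// Build a screenshot request URL from the documented query params (sketch only).
const buildScreenshotUrl = (assetUrl, ext, type) => {
  const u = new URL('https://app.webaverse.com/screenshot.html');
  u.searchParams.set('url', assetUrl); // asset to capture
  u.searchParams.set('ext', ext);      // e.g. 'vrm' or 'glb'
  u.searchParams.set('type', type);    // output format: 'png' or 'jpeg'
  return u.toString();
};

// buildScreenshotUrl('https://webaverse.github.io/assets/male.vrm', 'vrm', 'png');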

Output

Output screenshot will be posted back to the calling service.

Architecture

Flow Diagram

(flow diagram image)

Location

Webaverse App
└───src
    └───screenshot.js

Functions

GLTF/GLB/VRM Loader

Uses: Totum

// Load the asset via the Totum metaversefile API; on failure the error is logged
// and undefined is returned.
let object;
try {
    object = await metaversefileApi.load(url);
} catch (err) {
    console.warn(err);
}
return object;

The above code is common to GLTF and GLB: it loads the asset into context using the Totum module, which auto-detects the extension type and returns the object as a scene app.

VRM Loader

Uses: Totum

// Same pattern for VRM assets: Totum detects the extension and builds the scene app.
let object;
try {
    object = await metaversefileApi.load(url);
} catch (err) {
    console.warn(err);
}
return object;

The same code applies to VRM: it loads the asset into context, again relying on the Totum module to auto-detect the extension type and return the object as a scene app.
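As a sketch only (not the exact engine code), the snippet can be wrapped in a reusable async helper, since return is only valid inside a function; metaversefileApi is assumed to already be in scope:

// Reusable loader sketch built on the Totum metaversefile API.
async function loadAsset(url) {
  let object;
  try {
    object = await metaversefileApi.load(url); // Totum auto-detects .glb/.gltf/.vrm from the URL
  } catch (err) {
    console.warn(err);
  }
  return object; // scene app, or undefined if loading failed
}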

   // Collect the skeleton's leaf ("tail") bones via a depth-first traversal.
   const _getTailBones = skeleton => {
     const result = [];
     const _recurse = bones => {
       for (let i = 0; i < bones.length; i++) {
         const bone = bones[i];
         if (bone.children.length === 0) {
           if (!result.includes(bone)) {
             result.push(bone);
           }
         } else {
           _recurse(bone.children);
         }
       }
     };
     _recurse(skeleton.bones);
     return result;
   };
   // Walk up the parent chain and return the furthest ancestor matching the predicate.
   const _findFurthestParentBone = (bone, pred) => {
     let result = null;
     for (; bone; bone = bone.parent) {
       if (pred(bone)) {
         result = bone;
       }
     }
     return result;
   };
   // Count how many characters of `name` match the given regex.
   const _countCharacters = (name, regex) => {
     let result = 0;
     for (let i = 0; i < name.length; i++) {
       if (regex.test(name[i])) {
         result++;
       }
     }
     return result;
   };
   // Pick the best left/right eye bone candidate from the tail bones.
   const _findEye = (tailBones, left) => {
     const regexp = left ? /l/i : /r/i;
     const eyeBones = tailBones.map(tailBone => {
       const eyeBone = _findFurthestParentBone(tailBone, bone => /eye/i.test(bone.name) && regexp.test(bone.name.replace(/eye/gi, '')));
       if (eyeBone) {
         return eyeBone;
       } else {
         return null;
       }
     }).filter(spec => spec).sort((a, b) => {
       const aName = a.name.replace(/shoulder/gi, '');
       const aLeftBalance = _countCharacters(aName, /l/i) - _countCharacters(aName, /r/i);
       const bName = b.name.replace(/shoulder/gi, '');
       const bLeftBalance = _countCharacters(bName, /l/i) - _countCharacters(bName, /r/i);
       if (!left) {
         return aLeftBalance - bLeftBalance;
       } else {
         return bLeftBalance - aLeftBalance;
       }
     });
     const eyeBone = eyeBones.length > 0 ? eyeBones[0] : null;
     if (eyeBone) {
       return eyeBone;
     } else {
       return null;
     }
   };

The VRM loader is further equipped with logic to find the eyes of the avatar and center them. It makes use of two important functions, _getTailBones and _findEye: _getTailBones is called on the skeleton and returns the tail bones, and _findEye is then called on the tail bones returned from that step.
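A minimal usage sketch of these helpers, assuming skeleton is the THREE.Skeleton of the loaded VRM:

const tailBones = _getTailBones(skeleton);   // leaf bones of the rig
const leftEye = _findEye(tailBones, true);   // best left-eye candidate, or null
const rightEye = _findEye(tailBones, false); // best right-eye candidate, or null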

Totum

Overview

This library lets you compile a URL (https://, ethereum://, and more) into a THREE.js app representing it, written against the Metaversefile API.

You can use this library to translate your avatars, models, NFTs, web pages (and more) into a collection of import()-able little web apps that interoperate with each other.

Totum is intended to be driven by a server framework (like vite.js/rollup.js) and a game engine client (like Webaverse) to provide a complete immersive world (or metaverse) to the user.

It is easy to define your own data types and token interpretations by writing your own app template. If you would like to support a new file format or Ethereum Token, we would appreciate a PR.

Although this library does not provide game engine facilities, the API is designed to be easy to hook into game engines, and to be easy to drive using AIs like OpenAI's Codex.

Usage

// Compile the URL into a scene app via the Totum metaversefile API.
let object;
try {
    object = await metaversefileApi.load(url);
} catch (err) {
    console.warn(err);
}
return object;

Inputs

url: {URL of the asset that can be downloaded by the screenshot system} [Required]

Returns

Promise: Output object of the application
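As an illustrative usage sketch, assuming (per the overview) that the resolved object is a THREE.js scene app that can be added to a scene; scene and the asset URL are placeholders:

const app = await metaversefileApi.load('https://webaverse.github.io/assets/male.vrm');
if (app) {
  scene.add(app); // assumes the scene app behaves like a THREE.Object3D
}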

Supported Assets

VRM VOX JS SCN IMAGE HTML GLB GIF

Motivations

Architecture

Flow Diagram

(flow diagram image)

Related

https://github.com/TheNexusCity/editaverse/issues/26
https://github.com/TheNexusCity/editaverse/issues/1

DavinciDreams commented 2 years ago

legacythree2gltf.js

Converts legacy JSON models (created by the three.js Blender exporter, for THREE.JSONLoader) to glTF 2.0. When original .blend files are available, prefer direct export from Blender 2.80+.

NOTE: JSON files created with .toJSON() use a newer JSON format, which isn't deprecated, and these are still supported by THREE.ObjectLoader. This converter does not support that newer type of JSON file.

Installation:

npm install canvas vblob three

Usage:

./legacythree2gltf.js model.json --optimize

Known issues:

Creates .gltf files with embedded Data URIs. Optimize to .glb using glTF-Pipeline to reduce file size.
Limited support for morph targets (https://github.com/mrdoob/three.js/pull/15011)
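For the first issue, the glTF-Pipeline CLI can pack the .gltf output into a binary .glb; the file names here are placeholders:

npm install -g gltf-pipeline
gltf-pipeline -i model.gltf -o model.glb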

DavinciDreams commented 2 years ago

[Blender glTF 2.0 Importer and Exporter](https://nicedoc.io/KhronosGroup/glTF-Blender-IO#blender-gltf-20-importer-and-exporter)

Introduction

Official Khronos Group Blender glTF 2.0 importer and exporter.

This project contains all features from the previous exporter, and all future development will happen on this repository. In addition, this repository contains a Blender importer, with common Python code shared between exporter and importer for round-trip workflows. New features are included or under development, but usage and menu functionality remain the same.

The shared codebase is organized into common (Blender-independent) and Blender-specific packages:

Package organisation (diagram)

This structure allows common code to be reused by third-party Python packages working with the glTF 2.0 format.

Import & export process (diagram)

The main importer and exporter interface is the Python glTF scene representation. For export, Blender scene data is first extracted and converted into this scene description, which is then written out as the final JSON glTF file; any compression of mesh, animation, or texture data happens here. For import, glTF data is parsed and written into the Python glTF scene description, with any decompression executed in this step; the Blender internal scene representation is then generated from the imported glTF scene tree.

Installation

The Khronos glTF 2.0 importer and exporter is enabled by default in Blender 2.8 and higher. To reinstall it (for example, when testing recent or upcoming changes), copy the addons/io_scene_gltf2 folder into the scripts/addons/ directory of the Blender installation, then enable it under the Add-ons tab. For additional development documentation, see Debugging.
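As a sketch with placeholder paths (adjust both to your checkout and your Blender installation):

cp -r addons/io_scene_gltf2 /path/to/blender/2.80/scripts/addons/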

Debugging

Debug with PyCharm (NOTE: if you are using Blender 2.80+, you need the updated debugger script)
Debug with VSCode

Continuous Integration Tests

Several companies, individuals, and glTF community members contribute to Blender glTF I/O. Functionality is added and bugs are fixed regularly. Because hobbyists and professionals using Blender glTF I/O rely on its stability for their daily work, continuous integration tests are enabled. After each commit or pull request, the following tests are run:

Export Blender scene and validate using the glTF validator
Round trip import-export and comparison of glTF validator results

These quality-assurance checks improve the reliability of Blender glTF I/O.

CI

Running the Tests Locally

To run the tests locally, your system should have a blender executable in the path that launches the desired version of Blender.

The latest version of Yarn should also be installed.

Then, in the tests folder of this repository, run yarn install, followed by yarn run test.
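In a POSIX shell, that sequence is:

cd tests
yarn install
yarn run test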

DavinciDreams commented 2 years ago

https://github.com/webaverse/app/blob/7f11f4426652a214821539385a5cf8c734a79bcd/loaders.js

This file contains common file-format loaders that are reused throughout the engine and in userspace apps.