I want to visualize my own geoms at a specific location. I found out how to do this for primitive geometries, and now I want to achieve the same for a mesh. Specifically, assume I have an asset <mesh name="test_name" file="test_stl.stl"/> and I know the body pose and the local geom pose.

I thought the dataid of _mjvGeom corresponds to geom_dataid[geom_id], but as it turns out I have to set dataid to 2*geom_dataid[geom_id]. Why is that?

After setting dataid, I set pos[3] and mat[9] of the _mjvGeom to the geom's location in world coordinates (that is, body pose * local geom pose). This gave me incorrect results for meshes. I assume this is due to your mesh preprocessing, where you translate and rotate the meshes. The documentation says: "Center and align the mesh, saving the translational and rotational offsets for subsequent geom-related computations." So where are these offsets saved? I cannot find the relevant field in mjModel. And more importantly: how should I choose pos[3] and mat[9] of _mjvGeom as a function of the body pose, the local geom pose and the mesh offsets, such that the asset (mesh) is visualized at the correct location?

struct _mjvGeom                 // abstract geom
{
    // type info
    int     type;               // geom type (mjtGeom)
    int     dataid;             // mesh, hfield or plane id; -1: none
    int     objtype;            // mujoco object type; mjOBJ_UNKNOWN for decor
    int     objid;              // mujoco object id; -1 for decor
    int     category;           // visual category
    int     texid;              // texture id; -1: no texture
    int     texuniform;         // uniform cube mapping
    int     texcoord;           // mesh geom has texture coordinates
    int     segid;              // segmentation id; -1: not shown

    // OpenGL info
    float   texrepeat[2];       // texture repetition for 2D mapping
    float   size[3];            // size parameters
    float   pos[3];             // Cartesian position
    float   mat[9];             // Cartesian orientation
    float   rgba[4];            // color and transparency
    float   emission;           // emission coef
    float   specular;           // specular coef
    float   shininess;          // shininess coef
    float   reflectance;        // reflectance coef
    char    label[100];         // text label

    // transparency rendering (set internally)
    float   camdist;            // distance to camera (used by sorter)
    float   modelrbound;        // geom rbound from model, 0 if not model geom
    mjtByte transparent;        // treat geom as transparent
};
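For reference, here is a minimal sketch of what I am doing now (the function and variable names add_mesh_geom, bodyid, geomid are just illustrative, and I am assuming the standard mjv_/mju_ helpers such as mjv_initGeom and mju_mulPose are available in your version):

#include "mujoco.h"

// sketch: add an abstract mesh geom to the scene at the geom's world pose
void add_mesh_geom(const mjModel* m, const mjData* d, mjvScene* scn,
                   int bodyid, int geomid) {
    if (scn->ngeom >= scn->maxgeom) {
        return;                                 // scene is full
    }

    // init abstract geom with default settings
    float rgba[4] = {0.5f, 0.5f, 0.5f, 1.0f};
    mjvGeom* g = scn->geoms + scn->ngeom++;
    mjv_initGeom(g, mjGEOM_MESH, NULL, NULL, NULL, rgba);
    g->dataid = 2*m->geom_dataid[geomid];       // only works with the factor of 2 -- why?

    // world pose = body pose * local geom pose
    mjtNum wpos[3], wquat[4], wmat[9];
    mju_mulPose(wpos, wquat,
                d->xpos + 3*bodyid, d->xquat + 4*bodyid,           // body pose
                m->geom_pos + 3*geomid, m->geom_quat + 4*geomid);  // local geom pose
    mju_quat2Mat(wmat, wquat);

    // mjvGeom stores floats
    mju_n2f(g->pos, wpos, 3);
    mju_n2f(g->mat, wmat, 9);
}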
Each mesh is uploaded twice to the GPU: the original mesh and its convex hull. So 2*meshid is the original mesh and 2*meshid+1 is the convex hull, if defined. If the convex hull of a given mesh is not computed, the corresponding slot in the OpenGL call-list array is unused.

The mesh vertex coordinates, as defined in mjModel.mesh_vert, are relative to the geom position and orientation. The local frame is given by mjModel.geom_pos/quat. The global geom/mesh frame at runtime is given by mjData.geom_xpos/xmat.

The following caveat is causing your problem: at compile time the mesh vertex coordinates are re-centered at the mesh center of mass, and the entire mesh is rotated into the frame given by its principal axes of inertia. This re-centering/re-alignment transformation is then composed with the pos/quat of each geom referencing the mesh, and the result determines the frame for the geom. However, the mesh transformation itself is not saved in the compiled model... which is fine for standard use but is inconvenient for what you are trying to do.

The easiest way around this is to create a dummy world geom (with collisions and rendering disabled), reference the mesh from it, and then at runtime apply the local transformation given by the dummy geom's mjModel.geom_pos/quat to the world frame where the mesh is supposed to be rendered.
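For example, something like the following (a sketch, not tested; the geom name "mesh_dummy" and the target pose wpos/wquat are illustrative, and group 3 is assumed to be hidden under the default visualization options):

<worldbody>
    <geom name="mesh_dummy" type="mesh" mesh="test_name"
          contype="0" conaffinity="0" group="3"/>
</worldbody>

Then at runtime:

#include "mujoco.h"

// sketch: render the mesh at an arbitrary world pose (wpos, wquat) by composing
// that pose with the compiled offset stored in the dummy geom
void draw_mesh_at(const mjModel* m, mjvScene* scn,
                  const mjtNum wpos[3], const mjtNum wquat[4]) {
    int dummyid = mj_name2id(m, mjOBJ_GEOM, "mesh_dummy");
    if (dummyid < 0 || scn->ngeom >= scn->maxgeom) {
        return;
    }

    // final pose = desired world pose * dummy geom's local pose (the mesh offset)
    mjtNum pos[3], quat[4], mat[9];
    mju_mulPose(pos, quat, wpos, wquat,
                m->geom_pos + 3*dummyid, m->geom_quat + 4*dummyid);
    mju_quat2Mat(mat, quat);

    // fill in the abstract geom
    float rgba[4] = {0.5f, 0.5f, 0.5f, 1.0f};
    mjvGeom* g = scn->geoms + scn->ngeom++;
    mjv_initGeom(g, mjGEOM_MESH, NULL, NULL, NULL, rgba);
    g->dataid = 2*m->geom_dataid[dummyid];   // original mesh; +1 selects the convex hull
    mju_n2f(g->pos, pos, 3);
    mju_n2f(g->mat, mat, 9);
}

The point is that, since the dummy geom is declared with an identity local pose, its compiled mjModel.geom_pos/quat is exactly the mesh offset you were looking for, and composing it on the right of your desired world pose places the mesh correctly.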