========== Thread: Expected to work, but doesn't User: Michael Wolfson Date: 2015-02-24 [LIST] [*]Support for Octave (right now, it fails to load the MEX file). Note: not a high priority unless users request this capability [/LIST] ----- User: Emo Todorov Date: 2015-03-25 There is no one here using Octave, but we could look into it if it becomes a priority... are MEX files supposed to be binary compatible between MATLAB and Octave? My understanding is that Octave tries to mimic MATLAB but does not always succeed. Our MEX file is unusually elaborate, because it needs to maintain the socket connection to the simulator between calls. This is done via MATLAB's mex-locking mechanism. ----- User: Bitiquinho Date: 2015-07-17 Is there a way to use the Gazebo HAPTIX Matlab/Octave API to communicate with MuJoCo-haptix? On the Gazebo website it says: "The HAPTIX Matlab/Octave API is composed of five mex functions: hx_connect(), hx_robot_info(), hx_update, hx_read_sensors and hx_close(). hx_connect() and hx_close() are optional for the Gazebo simulator, but are included for compatibility with MuJoCo." So I suppose there is an intended compatibility, but I don't have any clue about what I have to change to make it work. ----- User: Bitiquinho Date: 2015-07-18 On second thought, I guess this is just the interface, and internally MuJoCo and Gazebo work differently, right? Could I ask for a Linux (.so) version of the mjhaptix_user{.lib,.dll} library? I could provide a MEX file for Octave with it. ----- User: Emo Todorov Date: 2015-07-19 Correct, the hx_XXX API is the same but the implementation is different (including socket connection, internal message format etc.), so you cannot use one communication library with the other simulator. Supporting Linux is not a high priority given that you need a Windows machine to run the simulator itself - in which case you might as well work in Windows and avoid dual machines or virtualization. 
Once the new API is finalized following feedback from the DARPA performer teams, I can port it to Linux and also open-source the C++ wrapper producing the MEX so you don't have to replicate all the work (it is over 1000 lines of C++ code). However I don't have the time to port/test every intermediate version, and I suspect there will be a few such versions. Can you use Octave on Windows? If that is an option, I can open-source the C++ MEX wrapper now so you can start working on an Octave analog... ----- User: Bitiquinho Date: 2015-07-19 I was able to run the simulator on Linux with Wine flawlessly. Even connected to it from another machine running Windows for a test. So I was hoping even for a Linux precompiled shared library (.so file) to write an Octave MEX just for wrapping the hx_* C/C++ calls (the haptix-comm project repository has a source for this that I modified to support a proper hx_connect call). Despite this, it would be great if your wrapper could be open sourced, even just for Windows. However, if I'm not mistaken, your mjhaptix_user library is 64-bit and there is only a 32-bit Windows installer for Octave. ----- User: Emo Todorov Date: 2015-07-20 Impressive! Did you have to do anything nontrivial or did it just run? The file mjhx.cpp is attached. ----- User: Bitiquinho Date: 2015-07-20 For the simulator? No, just used a clean Wine Prefix and it ran out of the box. Thanks for the file. ========== Thread: left hand, world API, timing User: Michael Wolfson Date: 2015-02-24 [LIST] [*]Ability to work left-handed (i.e. mount camera on left, rotate markers onto left residuum, and the on-screen representation of the MPL shows up as a southpaw). [*]Agreed-upon API for controlling the environment (e.g. execute tests under control of external software, such as MATLAB). [*]Validation that timing matches expected behavior of limb (i.e. 50 Hz update). [/LIST] ----- User: Emo Todorov Date: 2015-03-22 We will mirror the meshes and create a left-handed model. 
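Mirroring the meshes for a left-handed model involves more than negating one coordinate: flipping a coordinate turns each triangle inside-out, so the winding order must also be reversed to keep the face normals pointing outward. A minimal Python sketch of the idea (this is not the tool actually used here, and the vertex/face layout is an assumption):

```python
def mirror_mesh(vertices, faces, axis=0):
    """Mirror a triangle mesh across the plane where the given axis is zero.

    vertices: list of (x, y, z) tuples
    faces: list of (i, j, k) vertex-index triples

    Negating one coordinate reflects the mesh but reverses its orientation,
    so each triangle's winding order is swapped to restore outward normals.
    """
    mirrored_vertices = []
    for v in vertices:
        v = list(v)
        v[axis] = -v[axis]
        mirrored_vertices.append(tuple(v))
    # Swap two indices per triangle to reverse the winding order.
    mirrored_faces = [(i, k, j) for (i, j, k) in faces]
    return mirrored_vertices, mirrored_faces
```

Applied to every STL of the right hand, this would produce the geometry of a left hand; in practice a mesh tool (Blender, MeshLab) does the same reflection plus normal flip.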
As for motion capture, the cameras will need to be moved to the left side of the subject, or the hand-tracking body will need to be mounted on the palmar side -- otherwise the markers will get occluded during pronation-supination movements. Re API for controlling the environment, we currently have the MuJoCo-specific mjhx_reset function which takes an integer argument specifying a "keyframe". These keyframes are predefined configurations that can be saved in the model (see Settings / Sim dialog). The simulation then resets to the desired keyframe. What other control commands do we need? Re timing validation, the way to do it is in user-side code, because we are using a client-server model where all communications are initiated by the client (i.e. user). In MATLAB, simply type: tic; sensor = hx_update(command); toc ========== Thread: hx_collision User: David Kluger Date: 2015-03-16 We have a feature request for a new MATLAB function to include in the API. Please let me know if adding this function is feasible. Function syntax: boolean = hx_collision(object1,object2) Where object1 and object2 are strings denoting an object name (which would be defined in the .xml file MuJoCo is running to generate the simulation) The boolean returned is 1 if object1 and object2 are in contact with one another and 0 otherwise. Ideally, the argument object1 could be a list of strings, so hx_collision could detect if a group of environment objects defined in object1 is/are contacting object2, and return a list of booleans appropriately which has a length matching the number of objects defined in the object1 list. This function has the potential to be very useful for human experiments where we must time certain tasks, like how long it takes to pick up and move an object to a target in the VRE, for example. ----- User: Emo Todorov Date: 2015-03-17 hx_collision is a good idea, although it is not clear how exactly it should work. 
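For reference, the requested hx_collision semantics could be emulated entirely on the user side once the simulator exposes a list of active contacts; a hedged Python sketch (the per-contact record layout here is an assumption, not the actual API):

```python
def hx_collision(contacts, objects1, object2):
    """Emulate the requested hx_collision() on top of a full contact list.

    contacts: iterable of (body_a, body_b) name pairs reported by the
              simulator for the current step (hypothetical layout; a real
              world API might attach forces, normals, etc. to each record).
    objects1: list of object names to test against object2.
    Returns a list of booleans, one per entry of objects1, matching the
    behavior requested above.
    """
    # Order of the two bodies in a contact record is arbitrary,
    # so compare unordered pairs.
    pairs = {frozenset(c) for c in contacts}
    return [frozenset((obj, object2)) in pairs for obj in objects1]
```

Filtering like this on the client keeps the server-side API small, at the cost of transmitting the whole contact list each update.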
One may want more than a Boolean indicating presence or absence of contact. For example, it may be useful to have contact distance (or penetration), contact normal and tangential force, relative velocity in contact space, orientation of the contact normal in the world frame (so that you can tell if object A is on top of object B or the other way around). One option we are considering as part of the future "world API" is to return the complete list of active contacts, where each record indicates the two bodies that are in contact and contains the rest of the contact-related data available to the simulator. User-side code can then filter this list as desired. Would people find this useful, or is it overkill? In general, suggestions about extending the API are very welcome. There is a plan to do it, but it would be better if we did it based on user feedback. ----- User: David Kluger Date: 2015-03-19 [QUOTE="Emo Todorov, post: 20, member: 1"]One option we are considering as part of the future "world API" is to return the complete list of active contacts, where each record indicates the two bodies that are in contact and contains the rest of the contact-related data available to the simulator. User-side code can then filter this list as desired. Would people find this useful, or is it overkill?[/QUOTE] I like this idea and believe it would be useful as long as implementation of this feature does not slow down the API or the VRE update rate. ========== Thread: phantom objects User: David Kluger Date: 2015-03-19 New feature request: add ability to create "phantom" objects in the VRE To my knowledge, all objects in the simulation must have >0 mass and >0 density. While this makes sense for creating objects with which you want to interact in a physically realistic environment, it creates a shortfall for testing our motor decode algorithms within the VRE with human subjects. Ideally, we would like the ability to create translucent "target spheres" that stay in place (i.e. 
unaffected by gravity) and have no density (i.e. any object can move through it uninterrupted). These spheres would be used as targets for our subjects to try to reach with a specific decoded finger movement with MoCap deactivated. In the same test, other phantom spheres would be placed over the other finger tips at their rest positions. The subject would have to reach the target sphere with a specific finger while keeping the others at their resting position. These spheres could then change color based on whether the finger tips are in their appropriate positions, either in their resting positions or successfully moving to the target. We have used this experiment previously in another VRE to demonstrate the number of hand DOFs we can achieve with our motor decodes. This experiment is fundamentally doable with MuJoCo in its current state by taking advantage of the hx_read_sensors() function. Whether or not a finger is positioned properly can be determined by returning the amount of motor actuation from an hx_read_sensors() call. However, this is not very helpful for the subject who would have very little visual feedback as to whether their fingers are oriented properly. Currently, the only cues we can give the subject within the VRE are by updating the message via an mjhx_message() call. In a nutshell, we would like the ability to create zero-mass, zero-density phantom objects in the simulations. Combined with my previously-requested collision detection ability, the aforementioned DOF-determining experiment could be carried out in MuJoCo. Even better, we would also like to use the API to change the position and color of these phantom spheres. ----- User: Vikash Kumar Date: 2015-03-21 [QUOTE="David Kluger, post: 23, member: 17"]New feature request: add ability to create "phantom" objects in the VRE To my knowledge, all objects in the simulation must have >0 mass and >0 density. 
While this makes sense for creating objects with which you want to interact in a physically realistic environment, it creates a shortfall for testing our motor decode algorithms within the VRE with human subjects. Ideally, we would like the ability to create translucent "target spheres" that stay in place (i.e. unaffected by gravity) and have no density (i.e. any object can move through it uninterrupted). These spheres would be used as targets for our subjects to try to reach with a specific decoded finger movement with MoCap deactivated. In the same test, other phantom spheres would be placed over the other finger tips at their rest positions. The subject would have to reach the target sphere with a specific finger while keeping the others at their resting position. These spheres could then change color based on whether the finger tips are in their appropriate positions, either in their resting positions or successfully moving to the target. We have used this experiment previously in another VRE to demonstrate the number of hand DOFs we can achieve with our motor decodes. This experiment is fundamentally doable with MuJoCo in its current state by taking advantage of the hx_read_sensors() function. Whether or not a finger is positioned properly can be determined by returning the amount of motor actuation from an hx_read_sensors() call. However, this is not very helpful for the subject who would have very little visual feedback as to whether their fingers are oriented properly. Currently, the only cues we can give the subject within the VRE are by updating the message via an mjhx_message() call. In a nutshell, we would like the ability to create zero-mass, zero-density phantom objects in the simulations. Combined with my previously-requested collision detection ability, the aforementioned DOF-determining experiment could be carried out in MuJoCo. 
Even better, we would also like to use the API to change the position and color of these phantom spheres.[/QUOTE] There are two ways one can go about creating "phantom" objects in VRE [LIST=1] [*]If the phantom object is a shape, then use non-colliding GEOMs - GEOM refers to 'geometric shape'. They are used for visualization and contact detection. A geom lives in its parent's body frame. If you need a phantom shape static in the world, attach it to the root (world) body. Make sure to turn off its collisions using the contype="0" conaffinity="0" attributes. [*]If the phantom object is a point, use SITE. SITEs are mass-less entities used to model points of interest. For example - sensor location, camera location, special points in a body frame like fingertips etc. A SITE lives in its parent's body frame. If you need a phantom point static in the world, attach it to the root body. [/LIST] Example: Find attached an example model with two phantom objects. [LIST=1] [*]A phantom green sphere static with respect to the world. [*]A phantom magenta finger tip attached to the distal segment of the index finger. [/LIST] Note: - Change the file extension from .txt to .xml and place it inside the mjhaptix098/model folder next to MPL.xml [ATTACH]4[/ATTACH] ----- User: Emo Todorov Date: 2015-03-22 As Vikash said, the visible elements in MuJoCo are not bodies, but geoms and sites. You can attach them directly to the world body and achieve the desired effect. However we will have to extend the API to allow changing the geom/site position and color at runtime, and obtaining the fingertip positions (so you can compute the fingertip-object distance). If you want MuJoCo to detect when a fingertip is touching an object, this can be done with the present version of the software, by editing the model file as follows: 1. add a geom to the world, say a sphere 2. 
add a site at the same position but with slightly larger radius (and assign it to one of the site groups that are hidden by default, if you prefer to keep the site invisible) 3. create a new touch sensor referencing the site; this will select all contact points within the site volume and return the summed normal contact force as the sensor reading; 4. make sure collisions between the new geom and the fingertips are enabled; you probably don't want to sense any other objects or hand segments touching this geom, so you need to explicitly list the pair-wise collisions you want to sense. Let me know if you want to use this and I will make an example model for you. The new sensor will automatically appear in the list of touch sensors exposed by the API. But again, you will not be able to change colors or geom positions at runtime until we extend the API. ----- User: David Kluger Date: 2015-03-23 [QUOTE="Emo Todorov, post: 26, member: 1"]Let me know if you want to use this and I will make an example model for you. The new sensor will automatically appear in the list of touch sensors exposed by the API. But again, you will not be able to change colors or geom positions at runtime until we extend the API.[/QUOTE] Yes, please! Thank you. ----- User: David Kluger Date: 2015-03-23 [QUOTE="Emo Todorov, post: 26, member: 1"]If you want MuJoCo to detect when a fingertip is touching an object, this can be done with the present version of the software, by editing the model file as follows: 1. add a geom to the world, say a sphere 2. add a site at the same position but with slightly larger radius (and assign it to one of the site groups that are hidden by default, if you prefer to keep the site invisible) 3. create a new touch sensor referencing the site; this will select all contact points within the site volume and return the summed normal contact force as the sensor reading; 4. 
make sure collisions between the new geom and the fingertips are enabled; you probably don't want to sense any other objects or hand segments touching this geom, so you need to explicitly list the pair-wise collisions you want to sense.[/QUOTE] I did what you suggested, but there is a dilemma with this approach: the finger becomes obstructed when contact is made with the site-enclosed geom. We would like the option to be able to use the API to detect whether two volumes are overlapping without the need for contact to be made between two geoms. In other words, is it possible to create an additional field within the struct returned from hx_read_sensors() that indicates whether two sites' volumes overlap? ----- User: Emo Todorov Date: 2015-03-25 [QUOTE="David Kluger, post: 28, member: 17"]I did what you suggested, but there is a dilemma with this approach: the finger becomes obstructed when contact is made with the site-enclosed geom. We would like the option to be able to use the API to detect whether two volumes are overlapping without the need for contact to be made between two geoms. In other words, is it possible to create an additional field within the struct returned from hx_read_sensors() that indicates whether two sites' volumes overlap?[/QUOTE] We can of course add fields. However all data structures and functions whose names start with 'hx' are part of a standard API we designed with OSRF and DARPA, and as with most standardization efforts, agreeing took more work than implementing it :) If we were to add MuJoCo-specific functions (starting with 'mjhx') it would happen a lot faster, but that means we cannot touch the standard API, so it will have to be a separate call... In the meantime, here is a solution to your dilemma: make the contact very soft, so that the touch sensor will still detect a non-zero contact force, but its effect on the physics will be negligible. 
This can be done by adjusting solprm, for example: solprm = "1 1 1 1" I still have to document what exactly this does, but roughly speaking, the contact behaves as an implicit mass-spring-damper, with parameters solprm = (mass0, mass1, damping, stiffness). Smaller numbers mean softer contact. If you are specifying contact pairs explicitly, you can include the above attribute there. If you are relying on the automatic mechanism for detecting all pairwise collisions, this needs to be included in both geoms that are contacting (because in that case the solver takes the average of the geom values to determine the contact-specific values). ========== Thread: load keyframe User: Yuval Tassa Date: 2015-02-24 I wish there was some way to load a keyframe from the keyboard... How about "ctrl+backspace"? ----- User: Emo Todorov Date: 2015-06-11 Now that the Pro version leaves the GUI design and keyboard/mouse hooks to the user, you can implement such a shortcut. ========== Thread: touch sensor number User: David Kluger Date: 2015-03-24 I added 2 contact sensors to the API (from the original 19 on the virtual limb) in an attempt to allow the API to return contact forces on 21 contact sensors. Apparently, there is a 20-contact-limit for the API, because attempting to return contact data from an hx_read_sensors() call returned a MATLAB error: "Error using mjhx; Bad data size." This problem did not occur when I removed one of the touch sensors from the .xml file resulting in a total of 20 contact sensors in the simulation. I expected the API to be able to return contact data for however many touch sensors you declare in the .xml simulation file. However, there is a hard upper limit of 20. For our experiments, we may need API-control of over 20 contact sensors. Can MuJoCo's API capability be extended to accommodate more than 20 contact sensors? ----- User: Emo Todorov Date: 2015-03-25 There is a hard limit but it is 32... I suspect you are getting the error for a different reason. 
The user-side library compares the size of the data arrays returned from the simulator to the last hxRobotInfo it got from a call to hx_robot_info, or from hx_update (which calls hx_robot_info internally). If you connect, and then load a model with a different number of elements of any kind, you should call hx_robot_info from the user side before calling the other API functions. The reason for this design is that the user must be aware at all times what model is being simulated, or else bad things are bound to happen. We could of course transmit the constant model info (including sizes) at every update, but that seemed wasteful, thus the present design. ========== Thread: Add enhanced API control over simulation objects User: David Kluger Date: 2015-05-11 We would like the ability to change object colors and transparency via the API while a simulation is running. We would like this ability so we can reproduce experiments that we have performed previously where we make targets for fingers of the virtual limb and provide feedback to the participant when he/she has their fingers in the correct targets. The feedback can be visual (in the form of changing the target color from red to green) or tactile (in the form of sensory stimulation on the neural interface). I have a video showing the experiment performed in our old VRE if you would like it for reference, but I cannot upload the video here because the forum does not accept .mp4 uploads. ----- User: Emo Todorov Date: 2015-05-12 Just changed the forum settings to allow video files; please try to upload it again. Yes, changing object color and transparency is one of the features we are currently adding to the extended API. ----- User: David Kluger Date: 2015-05-12 Glad to hear color modification is going to be added to the new API. Is there a tentative release date? Unfortunately, the forum will still not let me upload a .mp4 file. What video file types are allowed? 
----- User: Emo Todorov Date: 2015-05-16 Strange that you cannot upload a movie... I just did without any problems... can you try again and tell me what error message you are getting if any? The allowed file extensions are: zip txt pdf png jpg jpeg gif c cpp m xml urdf mjb mp4 avi mpeg mkv ----- User: David Kluger Date: 2015-05-17 When trying to upload a .mp4 video using the "Upload a File" button on the bottom right of the reply pane, I get the error message screen shown in the attached file. It reads, "The following error occurred/ The uploaded file does not have an allowed extension./ 'filename'" ----- User: Emo Todorov Date: 2015-05-18 I thought you were talking about uploading Resources... anyway, now I changed the Attachment Upload options as well, and uploaded an mp4 here as a test. The file size limit is 5000 KB. ----- User: David Kluger Date: 2015-05-18 Success! The movie is attached. In this movie, we use the neural interface to decode finger movements from an open fist towards his palm to either a close or far target. The targets are rendered as translucent red spheres and turn green when the fingertips are inside the targets. He is not looking at the screen in this trial, but can use visual feedback for training trials. We stimulate nerve fibers via the neural interface when his fingers are in the target area. When he keeps his fingers in the target for a certain amount of time, he hears a beep and indicates whether he thinks the targets are close or far. There are two things we are doing with virtual objects in this video that we would like the ability to do with MuJoCo: [LIST=1] [*]Change object color based on finger position from the API (which you have indicated will be possible in the next release) [*]Move the target object's position from commands called via the API [/LIST] Being able to move objects via the API also expands what we can do experimentally with the VRE. 
We may want our volunteers to "track" objects with their fingers to further test our closed-loop control methods, i.e. have the target move instead of appear/disappear in specified locations. I hope this clears up some gray areas from my previous requests for expanded API control. ----- User: Emo Todorov Date: 2015-05-25 Nice demo! Re moving objects, MuJoCo has a special type of body called "mocap body". Such bodies are static for simulation purposes, but can be moved dynamically by external code. Currently the base of the hand is such a body. We have code (which is internal to the GUI but external to the simulator itself) that reads the OptiTrack data and moves this mocap body accordingly. The new API will allow getting and setting the positions and orientation of all mocap bodies. In this way you can achieve what you want, plus read the mocap data, and even replace the OptiTrack with an external motion capture system (although this will require some calibration steps which are presently automated). The new API should be released sometime this week. ========== Thread: Convex decomposition does not add concavity to simulation objects User: David Kluger Date: 2015-05-27 I tried installing the HACD library as suggested on the MuJoCo overview. I used the newer V-HACD 2.0 binaries the same developer released. The binaries are able to perform the decomposition into a .stl file, but MuJoCo cannot load them because they are in ASCII format (at least that's what the error message tells me). So I tried using Google SketchUp and added some plugins to perform CDs and export into binary .stl format. MuJoCo can render objects defined by the .stls generated by this method, but there is still no concavity. For example, I added a ring I made using SketchUp and its convex decomposition into the simulation. I placed a cylinder into the simulation inside of the ring. 
MuJoCo renders the objects just as I would expect and I can see faint lines in the ring where the convex decomposition broke apart the ring. However, once I press play, the ring shoots upwards as if the hole in the middle is solid. Have you found and confirmed a reliable method to generate concave meshes in MuJoCo? Have you tried using the HACD library with any success? ----- User: Emo Todorov Date: 2015-05-28 If the .stl file defines a non-convex mesh, it will be rendered correctly but the collision detector will use its convex hull. The only way to do collisions with a non-convex mesh is to decompose it into a union of convex meshes, put each resulting convex mesh in a separate .stl file, and assign each of these files to a separate geom. All these geoms should belong to the same body (because you don't want them to move relative to each other). Is this what you are doing? Or are the convex meshes in a single .stl file? If it is the latter, we are back to the situation where a single convex hull is used to represent one non-convex mesh -- which happens to contain multiple pieces that are convex, but MuJoCo doesn't know that, it sees it as a single mesh. If you send me the model or upload it on the forum under Resources I would be happy to take a look. ----- User: David Kluger Date: 2015-05-28 I see. The convex meshes I was describing were in a single .stl file. In the future, for hollow/concave objects that we need in simulations, we will render convex pieces together in the same body with separate .stl files as you suggested. Building and rendering these multi-part meshes will be a bit of a hassle, but 100% doable. Thanks for getting back to me. ----- User: Alireza Ranjbar Date: 2020-03-07 It would be good if Mujoco could support .obj files like PyBullet. Then instead of having to define multiple meshes from multiple stl file assets, all could be defined in one .obj asset. 
Is there at least a way to automatically generate multiple stl files from one .obj or one .stl file (and perhaps anything else that can simplify the modification of the MJCF)? I have a .obj (or stl) file which includes around 150 convex meshes inside it, and it would be rather difficult to make a separate stl file for each and write an xml file. ----- User: florianw Date: 2020-03-10 I suggest writing a simple bash/python script that goes through the .obj file and makes a system call to the command line interface of meshlab. I remember doing that once for exactly your purpose. ----- User: kracon7 Date: 2020-05-17 [QUOTE="Alireza Ranjbar, post: 5812, member: 2418"]It would be good if Mujoco could support .obj files like PyBullet. Then instead of having to define multiple meshes from multiple stl file assets, all could be defined in one .obj asset. Is there at least a way to automatically generate multiple stl files from one .obj or one .stl file (and perhaps anything else that can simplify the modification of the MJCF)? I have a .obj (or stl) file which includes around 150 convex meshes inside it, and it would be rather difficult to make a separate stl file for each and write an xml file.[/QUOTE] Hi Alireza, Have you figured out how to do this by scripts? I ran into the same situation as you where I had a .obj file containing multiple convex meshes but I'm not sure how to integrate them with mujoco. 
Thanks in advance![/QUOTE] [USER=1727]@kracon7[/USER] Hi, what I did in the end was to write a script in Blender's python interface to import the obj file and automatically separate and save stl files while writing an xml for them. If you wanted more information about it I'd be happy to help over email or whatsapp: [email]aliresa.r@gmail.com[/email], +49 15 77 37 941 68 ========== Thread: Joint compliance in non-thumb fingers User: David Kluger Date: 2015-05-29 We are finding it difficult to pick up and grasp objects with the virtual limb. While increasing frictional coefficients helps, we are noticing that the non-thumb fingers do not conform well to the objects they are trying to grasp and objects fall or pop out of the hand as a result. Our hypothesis is that more contact with phalanges will improve gripping, but we are having trouble getting all three phalanges of the fingers to flex and conform around objects. The lack of contact occurs because flexing the fingers causes them to curl so only one segment of the fingers ever contacts the objects at a time (unless the object has the "perfect" shape). The lack of contact is exacerbated by interphalangeal joints not being very compliant. My initial reaction was to increase the "compliance" field as intuition tells me more compliant joints would conform better to objects in the simulation. The modelling overview indicates compliance as one of the joint fields which we can tweak. However, MuJoCo does not seem to recognize this field and throws an error when I add it to joint declarations in the .xml files. Is this because the MPLs you are modelling the virtual limb after will not be able to have compliant joint motors? Is there another method to increase joint compliance? Is the error MuJoCo returns when I try to change compliance a bug? ----- User: Emo Todorov Date: 2015-05-31 The modelling overview chapter is outdated; the plan is to rewrite it over the summer. 
For now, look at the XML tab in the "?" dialog; it shows all valid XML elements and their attributes. The "compliance" field is no longer used, so the parser is complaining for good reason. I suspect your intuition is correct and if you increase compliance the fingers will morph around objects and give you more stable grasps. However you need to change the compliance of the equality constraints (used to enforce mechanical coupling) rather than the joints. Look at the constraint section towards the end of the XML. The elements there create equality constraints fixing the length of the specified tendon to zero. These "tendons" are simply combinations of joint angles defined in the preceding section (MuJoCo also supports spatial tendons that wrap around objects but we are not using them here). So to increase the compliance of any one of these constraints, add a solprm attribute to the corresponding constraint element. The field solprm changes the properties of the constraint solver (it can also be used for contacts, joint limits and dry friction). The four numbers are M, M, B, K where M is constraint mass, B is constraint damping and K is constraint stiffness. This is a new type of soft constraint model and is not easy to explain in a forum, see this paper for details: Convex and analytically-invertible dynamics with contacts and constraints: Theory and implementation in MuJoCo Todorov E (2014). In [I]International Conference on Robotics and Automation [URL]http://homes.cs.washington.edu/~todorov/papers/TodorovICRA14.pdf[/URL] [/I] Briefly, assuming x is the constraint deformation and f is the constraint force computed by the solver, the dynamics of the constraint deformation are: M * (xddot + B*xdot + K*x) = f This is integrated implicitly so it tends to be very stable, but nevertheless the deformation behaves roughly like a mass-spring-damper (only roughly because the force f computed by the solver is anticipating the constraint). So in a nutshell, smaller values of M, B, K correspond to more compliant constraints. 
Note that critical damping corresponds to B = 2*sqrt(K), and that M scales B and K. The reason M appears twice is that by making the two values different you can enable a soft-layer effect (which also requires setting the margin parameters). I am not sure that (10, 10, 20, 100) are good values; you can experiment. One problem you may encounter is that we are actuating only one joint and relying on the constraints to move the coupled joints. So if you make the constraints compliant this will effectively reduce the actuation... maybe we should redefine the actuators to act in a more distributed way, affecting all coupled joints directly. ----- User: David Kluger Date: 2015-06-01 [QUOTE="Emo Todorov, post: 45, member: 1"]One problem you may encounter is that we are actuating only one joint and relying on the constraints to move the coupled joints. So if you make the constraints compliant this will effectively reduce the actuation... maybe we should redefine the actuators to act in a more distributed way, affecting all coupled joints directly.[/QUOTE] I tried editing this feature in the .xml. I removed the proximal-intermediate phalanx coupling from the MCP actuator and added a second actuator to control the distal two phalanges for the index finger. The two distal phalanges are still coupled. I opted for this method just by trying to flex my index finger in unique ways. I find it easy to keep my index finger straight and flex from the metacarpal-proximal joint. I find it difficult to keep the intermediate-proximal joint straight and flex the distal-proximal joint. This gives us much more control over finger position and allows us to grasp more complex geometries, but the added DOF to each finger may be difficult to control with neural decodes (this is something we have never attempted before). I will try solprm modifications to see if it helps.
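The edit described above might look like the following sketch. The tendon and joint names are hypothetical, and the element layout is inferred from the discussion (fixed tendon coupling two joints, plus an equality constraint holding its length at zero); the exact 2015-era schema may differ, so check the XML tab in the "?" dialog:

```xml
<!-- Fixed tendon expressing the coupling between two finger joints
     (tendon and joint names are hypothetical) -->
<tendon>
  <fixed name="T_index_couple">
    <joint joint="index_PIP" coef="1"/>
    <joint joint="index_DIP" coef="-1"/>
  </fixed>
</tendon>

<!-- Equality constraint fixing the tendon length to zero.
     solprm = (M, M, B, K); smaller values make the coupling more compliant,
     e.g. replacing "10 10 20 100" with something softer like "2 2 6 10"
     (B roughly critically damped for K = 10). -->
<equality>
  <tendon tendon1="T_index_couple" solprm="2 2 6 10"/>
</equality>
```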
----- User: David Kluger Date: 2015-06-05 solprm modifications do not appear to add the compliance we are looking for in the fingers. Is there a way to change how the fingers actuate, possibly editing tendon fields other than solprm, to achieve this goal? ----- User: Emo Todorov Date: 2015-06-11 I agree about added DOFs -- it is not a good idea given the neural decoding context. And besides, the actual devices do not have that many independent motors. The change I had in mind was indeed to edit the actuators so that they act directly on all coupled joints and not just on the base joint. If you also make the equality constraints more compliant (to reduce the coupling) the combined effect should be better passive curling around objects. Currently the actuators act on single joints. To make them act on combinations of joints, define a "fixed tendon" with the desired linear combination of joints, and then define the actuator as acting on this tendon... don't know how doable this is given the incomplete/outdated XML documentation on the website though. ----- User: David Kluger Date: 2015-06-11 [QUOTE="Emo Todorov, post: 52, member: 1"]Currently the actuators act on single joints. To make them act on combinations of joints, define a "fixed tendon" with the desired linear combination of joints, and then define the actuator as acting on this tendon... don't know how doable this is given the incomplete/outdated XML documentation on the website though.[/QUOTE] Any advice on how to accomplish this given the lack of documentation would be greatly appreciated. Thank you. ----- User: Vikash Kumar Date: 2015-06-13 1) Add a fixed tendon over the joints you want it to act over. Use the "coef" values to choose the moment of the tendon over each joint. 2) Replace the actuator to act over the tendon instead of the joint. Use "kp" to choose the overall gain of the position actuator over the tendon.
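Putting the two steps together, the change might look like this sketch (tendon, joint, and actuator names are hypothetical; "coef" sets each joint's moment arm and "kp" the overall position-servo gain, as described above; the exact element names in the 2015-era schema may differ):

```xml
<!-- Step 1: fixed tendon spanning the coupled finger joints (names hypothetical) -->
<tendon>
  <fixed name="T_index_flex">
    <joint joint="index_MCP" coef="1"/>
    <joint joint="index_PIP" coef="1"/>
    <joint joint="index_DIP" coef="1"/>
  </fixed>
</tendon>

<!-- Step 2: position actuator driving the tendon instead of a single joint -->
<actuator>
  <position name="A_index_flex" tendon="T_index_flex" kp="10"/>
</actuator>
```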
[INDENT][INDENT]Replace the joint actuator with the tendon actuator in the actuator section. {[B]UPDATE[/B]: see the revised instructions in a later post} [/INDENT][/INDENT] Above changes were tested with objects of different curvature at different positions relative to the palm. Attached figures show representative wrapping achieved. [ATTACH=full]12[/ATTACH] [ATTACH=full]13[/ATTACH] [ATTACH=full]14[/ATTACH] [ATTACH=full]15[/ATTACH] [ATTACH=full]16[/ATTACH] [ATTACH=full]17[/ATTACH] [ATTACH=full]18[/ATTACH] [ATTACH=full]19[/ATTACH] [ATTACH=full]20[/ATTACH] [ATTACH=full]21[/ATTACH] ----- User: David Kluger Date: 2015-06-15 [QUOTE="Vikash, post: 56, member: 7"]1) Add a fixed tendon over the joints you want it to act over. Use the "coef" values to choose the moment of the tendon over each joint. 2) Replace the actuator to act over the tendon instead of the joint. Use "kp" to choose the overall gain of the position actuator over the tendon. [INDENT][INDENT]Replace the joint actuator with the tendon actuator in the actuator section. [/INDENT][/INDENT] Above changes were tested with objects of different curvature at different positions relative to the palm. Attached figures show representative wrapping achieved. [ATTACH=full]12[/ATTACH] [ATTACH=full]13[/ATTACH] [ATTACH=full]14[/ATTACH] [ATTACH=full]15[/ATTACH] [ATTACH=full]16[/ATTACH] [ATTACH=full]17[/ATTACH] [ATTACH=full]18[/ATTACH] [ATTACH=full]19[/ATTACH] [ATTACH=full]20[/ATTACH] [ATTACH=full]21[/ATTACH][/QUOTE] The finger compliance with this approach is exactly what we are looking for, but this comes at the expense of positional control of the finger. Controlling the actuator in the way you suggested causes the fingers to flex/extend until their control limit is reached whenever the actuator command is above/below zero, respectively.
We need to have positional control over the fingers AND compliance when an object comes between the fingers and the palm. Now that I have a better grasp on what the fixed tendon and equality fields do, I will tinker and try to make this happen myself. I will post if I come up with anything. If you come up with a solution in the meantime, I would greatly appreciate hearing about it. ----- User: Vikash Kumar Date: 2015-06-16 Seems like that was caused by the wide control range provided in the 'ctrlrange' field of the actuator. Here are revised instructions: 1) Add a fixed tendon over the joints you want it to act over. Use the "coef" values to choose the moment of the tendon over each joint. 2) Replace the actuator to act over the tendon instead of the joint. Use "kp" to choose the overall gain of the position actuator over the tendon. Replace the joint actuator with the tendon actuator (using a tighter ctrlrange) in the actuator section. ========== Thread: Advice for users who are running MuJoCo on machines without Visual Studio 2013 User: David Kluger Date: 2015-06-05 I have encountered two separate instances where collaborators have tried to install MuJoCo and have had trouble getting MATLAB to run the API from the .mex you provide in the downloadable HAPTIX .zip. Assuming their computer meets specs, the .mex is in the MATLAB path, and they are running compatible versions of MATLAB, the underlying problem is the lack of a C++ runtime library on their computer that is normally included with a VS2013 installation. The problem can be solved by downloading and installing the 64-bit version of the VS2013 C++ redistributable packages found [URL='https://www.microsoft.com/en-us/download/details.aspx?id=40784']here[/URL]. ----- User: Emo Todorov Date: 2015-06-11 Thanks David!
MuJoCo uses static linking of all libraries by default, but apparently the mex compiler prefers dynamic linking... I should look into its options and hopefully find a way to link the mex statically. In the meantime, one can download the runtime library as explained above. ----- User: Emo Todorov Date: 2015-06-18 I found a way to link the mex with the static version of the Visual Studio runtime libraries in the just-released MuJoCo HAPTIX 1.0 RC. There is a mex-compile settings file where one has to replace "/MD" with "/MT". I don't actually have a computer that has MATLAB but no Visual Studio, so I haven't tested it; however the mex size increased, so I am assuming it worked... ========== Thread: Contacts between geometries User: Feryal Date: 2015-06-09 We are currently using MuJoCo Version 0.5.0 to simulate contacts between multiple objects. In our simulations only a single contact per time step is returned between a pair of geometries, even while they are stacked on top of each other. We were wondering whether there is a way to retrieve all the contact points between geometries at a particular time step? ----- User: Vikash Kumar Date: 2015-06-10 Version 0.5.0 is obsolete. We strongly recommend that you upgrade to the latest version of Mujoco available on the downloads page. There is no way for us to reproduce the problem at our end, but if you can send your model file, I'll be happy to point out anything that's obvious to the eye. Do mention relevant details like: API (source/lib/mex) in use, platform (Windows/Linux/Mac), and the way you are retrieving the contacts. ----- User: Emo Todorov Date: 2015-06-11 To be more precise, we recommend downloading the latest version when it becomes available for download :) Right now only the socket-API version is available. Regardless of the version, MuJoCo always uses convex geoms for collision, so you should in general expect a single contact per pair.
There are specialized functions that return multiple contacts (box-plane for example can return 4) but the general mesh collider uses the Minkowski Portal Refinement (MPR) algorithm, which returns a single contact. Note that you can attach multiple geoms to the same body. In this way you can model bodies with arbitrary geometry; for example you can decompose a non-convex mesh into a union of convex meshes, load them as separate geoms and attach all of them to the same body. The engine is not hiding/filtering any contacts. mjData.contact contains all the contacts that were detected. The reason you get different contacts at different time steps is that convex geometries stacked on top of each other tend to wobble -- in which case the single contact point can move rapidly. We are considering an improvement to the MPR algorithm that will hopefully address this wobbling issue. But in the meantime, simply attach more than one geom to the same body. ----- User: Feryal Date: 2015-06-16 Thank you for the replies. We also thought about fixing this issue by attaching multiple geometries to the same body. However, we are getting the following error when we try to add more than 70 geometries within a given simulation: "Warning: could not add contact, Jacobian full" We were wondering whether this is a hard limit which cannot be surpassed? ----- User: Vikash Kumar Date: 2015-06-16 There are ways to increase the max Jacobian sizes in Mujoco. Export the mjcf schema using the version of Mujoco you have and look for the attribute called 'njmax'. Please respond with the exported schema and your model file for a detailed response. ========== Thread: MATLAB API: hx_read_sensors() bug User: Jake George Date: 2015-06-19 In version 1.0.0, the MATLAB API function hx_read_sensors() is returning an error: "Bad data size." I am trying to read joint sensor angles, velocities and contact sensor forces.
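A minimal read loop of the kind being described might look like this sketch. The hx_ function names are the ones discussed in this thread, but the sensor struct field names are assumed, not confirmed; the hx_robot_info call after connecting is what turns out to be required in 1.0.0, per the reply below:

```matlab
hx_connect('');              % connect to the simulator (empty host = local machine)
info = hx_robot_info();      % query model info so array sizes are known (required in 1.0.0)
for t = 1:100
    sensor = hx_read_sensors();
    angles = sensor.joint_pos;   % joint angles (field name assumed)
    vels   = sensor.joint_vel;   % joint velocities (field name assumed)
    forces = sensor.contact;     % contact sensor forces (field name assumed)
end
hx_close();
```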
Before version 1.0.0, I was able to use this function to read in the sensor values as a struct. From there, I could easily select the contact values through the contact field, etc. In version 1.0.0 it seems like the only way to access this same data is to use mjhx('get_sensor'). However, the data returned from this function is not indexed in any coherent way, and changes if any fields are added to the .xml file. For version 1.0.0, is there a way to use the hx_read_sensors() function from the previous version? Or is there a more consistent and intuitive way to read the values of joint and contact sensors? Thanks, Jake ----- User: Emo Todorov Date: 2015-06-20 This is a change that needs to be documented. The standard ("hx_") API was unfortunately designed to have a hidden state, namely the result from the last hx_robot_info call. This is used to determine the correct sizes of all arrays. Previously, this call was made automatically within hx_connect, but that is not a good idea because the user may load a different model without disconnecting/reconnecting. So, call hx_robot_info after hx_connect and again when you load a new model. Reloading the same model does not require this call because the array sizes are the same. ----- User: Jake George Date: 2015-06-22 That did the trick. Thanks for the quick reply. When can we expect the documentation to be updated? ----- User: Emo Todorov Date: 2015-06-23 About a week from now. I may post intermediate versions though. ========== Thread: Mujoco Haptix runs very slow after several minutes of continually updating at 30 Hz User: Suzanne Wendelken Date: 2015-06-23 The Mujoco Haptix program (v0.98) runs very slowly and the model needs to be reset after several minutes (10-20 min.) of continually streaming commands at 30 Hz using the Matlab API.
----- User: Emo Todorov Date: 2015-06-23 There is a known bug in the executable itself (independent of the MATLAB API) that causes the timing to go wild after a while, but we have only seen it after an hour or so... Does this happen every time? If you open the Info text panel (lower left) what do you see for FPS, timing / realtime factor, mocap latency? Are you running it on the standard hardware system or something else? Is it running in stereoscopic mode and using the motion capture system? Also, can you try the new version (1.0.0 RC) and see if the problem is still there? ----- User: Suzanne Wendelken Date: 2015-06-23 The slow frames/freezing does not happen every time, only when we run for long periods of time. I tried running 1.0.0 RC for more than 50 minutes. The FPS went down to 49, then eventually disappeared (it read nothing and was pretty much frozen). We are running on the standard hardware system provided. The Mocap field was blank, probably because we were not using the motion capture system at the time. ----- User: Vikash Kumar Date: 2015-06-24 As a short-term workaround while we chase this bug, hit reload and everything will be back to normal. ----- User: Suzanne Wendelken Date: 2015-06-24 Thanks for looking into it. In the meantime, we are just reloading, and it does fix the problem. However, we notice that if we accidentally close mujoco while streaming commands from the Matlab API, this causes Matlab to freeze (to the point it has to be killed using task manager). Our workaround for that problem is procedural (i.e. we just don't close mujoco while streaming commands from Matlab). ----- User: Vikash Kumar Date: 2015-06-25 It's strongly advised that you exit Mujoco [B]after[/B] you have closed all open client connections (mex/cpp). Matlab locks mjhx.mex while communicating with Mujoco. To connect to Mujoco, mjhx internally opens a socket connection for necessary handshaking and communication.
It's important that the link remains alive until you programmatically close the connection and mjhx.mex is unlocked. Accidentally closing Mujoco will break the link, with results similar to what you mentioned above. ----- User: David Kluger Date: 2015-07-17 The newest release of Mujoco appears to have resolved the hanging issue reported here. I ran Mujoco overnight on my computer and the simulation was still running properly. However, I have not seen any release notes indicating that this bug has been fixed. Can the developers confirm/deny whether the bug has been addressed? ----- User: Emo Todorov Date: 2015-07-17 Yes, it is fixed. This is mentioned in the Release Notes as "timing bug" - the bug was caused by a timer overflow, which in turn caused the program to slow down intentionally, because it thought that it was too early to take another simulation step. ========== Thread: Python API? User: Matt Date: 2015-06-26 Hi there - Are there any plans to add a Python interface in the future? Similar to how the mex file was created, I'm thinking the easiest way would be to add a few wrappers on the cpp file(s) for a shared object library for Python? ----- User: Emo Todorov Date: 2015-06-26 There are no plans to add new interfaces to MuJoCo HAPTIX. However users can do that on their end using the C library, as you pointed out. If you want to write such a wrapper, and would find the existing MEX wrapper useful as a starting point, let me know. As for the MuJoCo Pro library, some of the current users have already written wrappers for Python, and hopefully will make them available once the library is publicly released. But the library's shared-memory API is very different from the socket API in HAPTIX, so those wrappers are not likely to help here. ----- User: Matt Date: 2015-06-26 Thanks for the reply; I wasn't sure of the API differences between HAPTIX and Pro. Has a release date for Pro been announced? I haven't been able to find one.
There are a number of neat publications we're anxious to take a closer look at! ----- User: Emo Todorov Date: 2015-06-27 The Pro version will be released when I find the time to write the documentation. Sometime this summer. The software works fine, but currently the only people who can figure out how to use it are members of my research lab and close collaborators with access to the source code. ----- User: Isaac Myers Date: 2018-06-14 I work on the HAPTIX program with Ripple LLC in Salt Lake City. I also am interested in using Python. The MEX wrapper would be very useful. Are you willing to give enough source to make and receive socket calls directly? Thanks. ========== Thread: Contact Sensor Output User: Jake George Date: 2015-07-02 From the documentation: "The simulator detects all contact points that fall within a given sensor zone and involve the body to which the sensor is attached. The contact normal forces of all detected contact points are then added up (as scalars and not vectors) and the result is returned as the output of the simulated contact sensors. Thus the sensor units are Newtons." Does this mean that we won't be able to distinguish between an indentation of a very sharp probe at high force versus an indentation with a big probe at low force? Since these will produce drastically different neural responses, it is crucial to distinguish between the two. What is the suggested course of action to decipher between the two types of contact? ----- User: Emo Todorov Date: 2015-07-03 The API returns the contact force, regardless of the curvature of the contacting surfaces. Think of it as a one-pixel contact sensor; it has no way of knowing what the underlying object shape is, it just returns the contact force acting on its surface. So if you are holding a probe and pushing against a wall, you should get the net contact force regardless of the shape of the probe. 
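In model terms, a sensor zone of this kind is attached via a site on the body of interest. A sketch follows; the site, body, and size values are hypothetical, and the element syntax follows later MJCF conventions, which may differ from the 2015 schema:

```xml
<!-- Site defining the sensor zone on a fingertip body (names/sizes hypothetical) -->
<body name="index_distal">
  <site name="S_index_tip" type="ellipsoid" size="0.008 0.006 0.004"/>
</body>

<!-- Touch sensor: sums the normal forces of all contacts inside the zone -->
<sensor>
  <touch site="S_index_tip"/>
</sensor>
```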
Depending on how the geometry is modeled, a larger probe may generate one contact point or multiple contact points -- in which case the contact force will be distributed among the multiple contacts. If you know where these contacts will be (say at the vertices of a box probe) and put multiple touch sensor zones there, you will know how much force is coming from which vertex. Note however that when meshes are used, MuJoCo generates at most one contact point per geom pair regardless of how large the geoms are; this is the nature of convex collision detection. The new API (just published) gives you the explicit list of contacting geom pairs, so you can get additional information that way. In particular you can tell if one contact is generating a large force or multiple contacts are generating small forces that add up. ========== Thread: Left MPL model User: David Kluger Date: 2015-07-08 Can you please provide a left arm MPL model? We will need this if study participants elect to use their left arm for experiments, which is highly likely. ----- User: Emo Todorov Date: 2015-07-08 We will soon transition to a model of the Luke hand from DEKA (which will be the HAPTIX hardware platform apparently). So the thinking was to focus on that new model and make all further improvements there... but I don't know when we will have all necessary information to construct the new model. When do you think you will need a left-hand version? If necessary we can go through the exercise of flipping the MPL model -- which is rather manual unfortunately. ========== Thread: MuJoCo running slowly due to a large number of objects in a simulation User: David Kluger Date: 2015-07-10 We are trying to create a Box and Block Test (BBT) simulation that has 75 freely jointed bodies in the simulation. The program runs noticeably slower, presumably due to the program having to solve so many contact interactions with the 75 freely jointed blocks touching and knocking into each other. 
Is there a way to reduce the computational load in order to improve the VRE's performance for a simulation with so many freely jointed objects? See [MEDIA=youtube]jmpNXj5oOo0[/MEDIA] for a demo of the test we are trying to recreate in the VRE. ----- User: Emo Todorov Date: 2015-07-10 The default PGS solver scales as O(N^3) where N is 3*number_of_contacts. There is another solver (called Sparse) which scales as O(N) but it is not fine-tuned yet. You can try it and see what you get (from Settings / Physics / Solver). With your current simulation, what is the CPU time-per-step you are getting in the Info box? Is the system not able to simulate in real-time anymore? If so, have you tried making the simulation timestep slightly larger than the CPU time? ----- User: Vikash Kumar Date: 2015-07-11 Another way to reduce the computational load is to reduce the dimensionality of the contacts, specified using "condim" in the xml. "condim" encodes the nature of the forces a contact generates: condim=1 => normal forces; condim=3 => normal forces, tangential frictional forces; condim=4 => normal forces, tangential frictional forces, torsional frictional forces; condim=6 => normal forces, tangential frictional forces, torsional frictional forces, rolling frictional forces. By default condim=3. If you think you can work without frictional forces for some objects (i.e. the objects will not penetrate but they will be smooth), make condim="1" for all such objects. This will give you some speedup. I'll definitely make the box's condim=1. ----- User: David Kluger Date: 2015-07-13 Note: the simulation is using 40 blocks right now. It is running off my workstation computer, not the HAPTIX-supplied computer. Workstation specs: Intel Core2 @2.13 GHz with 8GB RAM (clocking out at a stellar 3.3 Windows Experience Score...). While real-time factors and CPU timing are faster on the HAPTIX PC, they are still <1. OptiTrack is not connected, so MoCap is not running.
Here are the results I am getting when changing the variables you suggested: 1) Timestep=0.002; solver=PGS; condim_box=3; condim_block=3 (default options). [B]Time=0.20x; CPU~8.5ms[/B] 2) [I]Timestep=0.005[/I]; solver=PGS; condim_box=3; condim_block=3. [B]Time=0.50x; CPU~10ms[/B] 3) [I]Timestep>0.005[/I]; solver=PGS; condim_box=3; condim_block=3. [B]Simulation is unstable, blocks fly around the simulation.[/B] 4) Timestep=0.002; [I]solver=Sparse[/I]; condim_box=3; condim_block=3. [B]Time=0.31x; CPU~6ms; Simulation is quasi-stable, blocks jump and jitter, and MPL bobs up and down over the Mocap object.[/B] 5) [I]Timestep=0.005; solver=Sparse[/I]; condim_box=3; condim_block=3. [B]Simulation is unstable, blocks fly around the simulation.[/B] 6) Timestep=0.002; solver=PGS; [I]condim_box=1; condim_block=1[/I]. [B]Time=0.44x; CPU~4ms. Simulation is stable, but blocks are difficult to grab because they are very slippery.[/B] 7) [I]Timestep=0.005; solver=PGS; condim_box=1; condim_block=1[/I]. [B]Time=0.9x; CPU~5ms. Simulation is stable, but blocks are difficult to grab because they are very slippery.[/B] 8) [I]Timestep=0.005;[/I] solver=PGS; [I]condim_box=1;[/I] condim_block=3. [B]Time=0.22x; CPU~9.5ms[/B] 9) Timestep=0.005; solver=PGS; condim_box=3; [I]condim_block=1[/I]. [B]Time=0.52x; CPU~10ms. Simulation is stable, but blocks are difficult to grab because they are very slippery.[/B] In a nutshell, by changing the options that you suggested, I can only get the simulation to run fast enough when the blocks are too slippery to interact with (option-set #7). I have attached the model I built with the block and box test for your reference. The file is using default settings. You will also need the blue wood texture I added (also attached).
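For reference, the condim variants tested above correspond to per-geom attributes along these lines (a sketch: geom names and sizes are hypothetical):

```xml
<!-- condim="1": normal force only (frictionless) - cheaper but slippery -->
<geom name="block_01" type="box" size="0.025 0.025 0.025" condim="1"/>

<!-- condim="3": normal plus tangential friction (the default) -->
<geom name="block_02" type="box" size="0.025 0.025 0.025" condim="3"/>
```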
----- User: Emo Todorov Date: 2015-07-17 This kind of simulation needs a faster CPU, as well as some fine-tuning of the sparse solver which we haven't done yet, and a dedicated box-box collision function which we have not yet included (presently MuJoCo uses a general convex collision function for box-box, resulting in a single contact point that runs around trying to avoid penetrations everywhere, which in turn reduces stability). In the meantime, would it be possible to use capsules instead of boxes? They are better behaved under collision. Ellipsoids may also be an option. Re condim, the dimensionality of the contact space is defined as the max of the condim values of the two colliding geoms. So if all boxes and the ground have condim=1 while the hand geoms have condim=3 or 4 as in the latest model, only box-box and box-ground interactions should be slippery, while hand-box interactions should be normal... anyway, one shouldn't have to resort to slippery contacts. ----- User: David Kluger Date: 2015-07-20 Re capsules or ellipsoids instead of boxes, this is a workaround we are hesitant to use. We would like to create tests in the VREs using dimensions and geometries that have been widely accepted and validated in the scientific literature. Replacing the blocks with capsules or ellipsoids may be too big a change for us to credibly relate our results back to those that are already published. However, a "Box and Capsule Test" may still provide valuable results and we will consider using it for testing our own software/hardware. I'll see if this makes a difference, consult the group, and get back to you. ----- User: Emo Todorov Date: 2015-07-26 I was able to get a decent real-time simulation with your boxes on a good mobile processor (i7-4720HQ): CPU time around 2.7 msec, simulation timestep 3 msec. On the HAPTIX Xeon system it should be even better. The key is to increase the number of solver iterations for the sparse solver.
This greatly improves stability for such complicated models, without making it much slower - because most of the CPU time is spent setting things up before the solver runs. Another issue was that the mocap weld constraint was under-damped (we need to fix this in the official model as well), but the main reason for the MPL bobbing was the insufficient number of solver iterations. Try the following two changes to your XML: