
Josh

Everything posted by Josh

  1. It's not currently supported, but it would not be hard to add: https://www.ultraengine.com/learn/CreateMesh?lang=cpp Can I ask what you want to use it for?
  2. 3ds max project files for testing color precision for Lunar visualization: lunarhdrtest.zip
  3. Comparison of HDR and LDR textures generated from original NASA data, for rendering the Lunar surface.
  4. There will be an update for 1.0.2 later this week with fixes and some new features for image processing.
  5. It looks like it has trouble coming to rest, since the surface is not flat. I suppose no matter which way the cube falls, it's never going to land on a flat surface. In any case, I will check this out. You might also check out the cube sphere, as this will provide a more even distribution of polygons: https://www.ultraengine.com/learn/CreateCubeSphere?lang=cpp
  6. I left this running for several hours and it only got through about 5000 out of 98,000 columns...which means it would take about three days to process! Not good. I tried enabling write caching on my HDD, but it actually runs at the same speed as the USB drive (which Windows says does not allow write caching), after some long initial pauses. I think reading and writing to disk at the same time is probably just a really bad idea, even if they are two different disks. The reason I am doing this is because the image size is irregular and I want to resize the whole thing to power-of-two, which it is very close to. I can't really split the image up into tiles at this resolution because it doesn't divide evenly. Well, maybe this one could be split into 12 x 4 tiles, but other images might not work so well, and it adds another layer of confusion. In order to maintain accuracy I think I will need to implement a Pixmap::Blit method that can optionally accept floating point coordinates that act like texture coordinates. We already have Pixmap::Sample and that will help a lot. That way I can create small tile images in system memory, one at a time, and blit the big pixmap stored in virtual memory onto the tiles, without creating any distortion in the image data. When you finish each tile you save it to a file and then move on to the next area, but you aren't constantly switching between read and write to process each pixel. I'm going to write a blog about this but I want to keep my notes here so I can go back and view it later.
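A minimal sketch of the floating-point-coordinate blit described above, assuming a simple single-channel pixmap with bilinear sampling. The names here (Pixmap, Sample, BlitRegion) are illustrative stand-ins, not the engine's actual API:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal single-channel pixmap, a hypothetical stand-in for the real class.
struct Pixmap {
    int width = 0, height = 0;
    std::vector<float> pixels;
    Pixmap(int w, int h) : width(w), height(h), pixels(std::size_t(w) * h, 0.0f) {}
    float Read(int x, int y) const { return pixels[std::size_t(y) * width + x]; }
    void Write(int x, int y, float v) { pixels[std::size_t(y) * width + x] = v; }

    // Bilinear sample at normalized (u, v), like a texture lookup.
    float Sample(float u, float v) const {
        float fx = u * width - 0.5f, fy = v * height - 0.5f;
        int x0 = int(std::floor(fx)), y0 = int(std::floor(fy));
        float tx = fx - x0, ty = fy - y0;
        auto at = [&](int x, int y) {
            x = std::min(std::max(x, 0), width - 1);
            y = std::min(std::max(y, 0), height - 1);
            return Read(x, y);
        };
        float a = at(x0, y0) * (1 - tx) + at(x0 + 1, y0) * tx;
        float b = at(x0, y0 + 1) * (1 - tx) + at(x0 + 1, y0 + 1) * tx;
        return a * (1 - ty) + b * ty;
    }
};

// Resample one rectangular region of a big source (given in normalized
// texture-style coordinates) onto an entire small tile. The tile lives in
// system memory and is written sequentially; only reads touch the big pixmap.
void BlitRegion(const Pixmap& src, Pixmap& tile,
                float u0, float v0, float u1, float v1) {
    for (int y = 0; y < tile.height; ++y)
        for (int x = 0; x < tile.width; ++x) {
            float u = u0 + (u1 - u0) * (x + 0.5f) / tile.width;
            float v = v0 + (v1 - v0) * (y + 0.5f) / tile.height;
            tile.Write(x, y, src.Sample(u, v));
        }
}
```

Each tile can then be saved to a file before moving on to the next region, so reads and writes never interleave per pixel on the same device.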
  7. I am working with a 128 GB USB drive now. These sure have come down in price! Strange results when I try resizing a 109440 x 36482 image using StreamBuffers. I am printing the time elapsed for every 1000 pixels that get processed. The routine is doing reads from one file and writes to another. CPU usage dropped to zero and it just hung. The longest pause was one full minute! After that it started going fast again, and keeps buzzing along happily at 1000 pixels every 30 milliseconds: Resizing albedo... 30 38 30 30 30 31 161 65629 1355 1363 1400 1367 1409 1385 9208 16202 1355 634 5705 11285 8836 3867 562 5278 8825 8839 7043 5396 8838 11279 8636 1921 489 63 63 62 64 72 82 2673 293 340 327 322 2774 360 328 357 356 2758 314 306 332 348 2748 309 322 297 333 2785 316 313 308 304 2734 204 29 28 30 29 28 29 29 28 29 31 28 29 29 29 28 29 31 29 29 ...
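For reference, the per-chunk timing log above can be produced with std::chrono along these lines (ProcessPixel is a hypothetical stand-in for the per-pixel read/resample/write step):

```cpp
#include <cassert>
#include <chrono>
#include <vector>

// Hypothetical per-pixel work; in the real routine this is the
// read-from-one-file / write-to-another step.
static volatile float sink = 0.0f;
inline void ProcessPixel(int i) { sink = sink + i * 0.001f; }

// Process `total` pixels and record elapsed milliseconds per `chunk`,
// which yields the kind of per-1000-pixel log shown above.
std::vector<double> TimeChunks(int total, int chunk) {
    std::vector<double> times;
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < total; ++i) {
        ProcessPixel(i);
        if ((i + 1) % chunk == 0) {
            auto t1 = std::chrono::steady_clock::now();
            times.push_back(std::chrono::duration<double, std::milli>(t1 - t0).count());
            t0 = t1;  // restart the clock for the next chunk
        }
    }
    return times;
}
```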
  8. Because the culling iterates through the hierarchy and stops if shadows are disabled. This is the intended behavior.
  9. Did you enable shadows on the pivot?
  10. This is an interesting situation. The simple answer is you are moving too quickly, or rather accelerating too quickly. Instantaneous teleportation of one meter is not realistic. The more complicated answer is that the engine doesn't know a shadow needs to be updated until the rendering call begins, but at that point it already has the visibility list it is going to use, which does not include the light casting that shadow, since the culling thread didn't include it, since the shadow was not invalidated when the culling process began. Therefore, the renderer has to wait until the next visibility list is received before the shadow gets updated. I suppose a persistent visibility list for each shadow might be a way of dealing with that, but under normal conditions an object will begin accelerating more gradually, which will trigger a shadow update without a big mismatch between the visible geometry and the shadow map info. A persistent list would also have the unwanted side effect of preventing resources from being freed from memory until all shadows are refreshed.
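On the user side, one workaround is to spread a large move over several frames so each step stays small enough for shadow invalidation to keep pace with culling. A sketch of that idea, with illustrative names (Vec3, StepToward), not engine commands:

```cpp
#include <cassert>
#include <cmath>

// Rather than teleporting an entity a full meter in one step, move it a
// bounded distance per frame toward the target position.
struct Vec3 { float x, y, z; };

Vec3 StepToward(const Vec3& pos, const Vec3& target, float maxStep) {
    Vec3 d{ target.x - pos.x, target.y - pos.y, target.z - pos.z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (len <= maxStep) return target;  // close enough: snap to target
    float s = maxStep / len;            // scale the step to maxStep length
    return { pos.x + d.x * s, pos.y + d.y * s, pos.z + d.z * s };
}
```

Calling this once per frame (e.g. with maxStep = 0.25f) turns a one-meter jump into a few small moves, each of which triggers a shadow update without a big mismatch between the visible geometry and the shadow map.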
  11. We have plenty of space for images. JPEG is best for screenshots.
  12. Since all physics commands are executed on a separate thread, that would not really work.
  13. The kinematic joint would rotate the entity to whatever rotation you set. The vector joint will still let the entity spin freely around the vector if something hits it.
  14. Does the kinematic joint meet your needs? You can set the force to zero so it only handles rotation.
  15. Here is one possible explanation: You can't use a swarm of bots to artificially raise your ranking on Steam without spending so much money it would be unprofitable. However, you might be able to use bots to artificially depress the ranking of other products, thereby elevating your own ranking relative to them. The newest game on Steam right now is Winnie the Pooh: The Serial Killer, with an app ID of 2273510. That means there are somewhere around two million products on Steam. It would be possible for me to create an automated process that registers new Steam accounts and browses pages without taking any further action. This would cause the Steam ranking system to downrank those pages, since they are not converting into sales. The bots would be instructed to avoid the pages I don't want to damage, so those pages would only have human traffic, and Steam would record a higher conversion rate for them, as a higher percentage of page views would result in a sale. This would result in the products I want to promote rising in the rankings and earning more cash. I don't know if the numbers really work out, but if they do there would be a strong incentive to set up such a system, and I guarantee you somebody has tried. Something like that seems more believable to me than the story that 99% of people add a free app to their account but then forget to ever play it.
  16. You could make a wireframe version of the mesh and display it in addition to the polygon version: https://www.ultraengine.com/learn/CreateMesh?lang=cpp
  17. I know you like Bullet, but there's also Newton 4, which is quite a lot faster. I started implementing it but decided to stick with Newton 3 because I knew I could deliver reliable physics with it. There's also a hidden implementation of Box2D in the engine, but I don't think it is usable yet. If you're interested, I can abstract away the physics engine so the end user (you) can insert your own physics library and still use the same engine commands.
  18. In case anyone is curious, I am trying to process this data for lunar terrain mapping: https://imbrium.mit.edu/DATA/ Here's image data for the moon: https://imbrium.mit.edu/DATA/LOLA_GDR/CYLINDRICAL/ And here's the raw laser altimeter data: https://imbrium.mit.edu/DATA/LOLA_RDR/
  19. The fix for this is available in the new 1.0.2 channel.
  20. New channel 1.0.2:
      • Fixed custom widget crash
      • Added new Pixmap::Resize overload
      • Added new CreateStreamBuffer overload
      • Fixed some internal math so Pixmap::WritePixel and ReadPixel can handle very big virtual pixmaps (created with a StreamBuffer)
  21. 1.0.1 is now the default channel. I want to lock in a new version about once a month. 1.0.1 will never change again, and future updates will be available on the next new channel 1.0.2 (which does not exist yet).
  22. Free atmospheric music (attribution required, in the download I looked at):
  23. Besides the obvious driver update, it is also possible to enable Vulkan debugging by adding the validation layer in Ultra.json, but you don't need to do this unless you are just curious about it. I think I know what my next GPU will be.
  24. Yeah, if I randomize the read position the delays go from 40 milliseconds to 4 seconds. The lesson here is that big data processing can only be done on an SSD, not because of the IO speed necessarily but because of the seek speed.
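A crude way to observe that seek-latency gap yourself: time the same set of block reads once in file order and once at shuffled offsets. This is a generic sketch, and results are only suggestive, since OS caching hides much of the effect on small files:

```cpp
#include <cassert>
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

// Time a batch of block reads at the given file offsets. Feeding offsets in
// file order measures sequential access; shuffling the same offsets first
// (e.g. with std::shuffle) measures seek-dominated random access.
double ReadBlocks(std::FILE* f, const std::vector<long>& offsets, std::size_t blockSize) {
    std::vector<char> buf(blockSize);
    auto t0 = std::chrono::steady_clock::now();
    for (long off : offsets) {
        std::fseek(f, off, SEEK_SET);
        std::size_t got = std::fread(buf.data(), 1, blockSize, f);
        (void)got;  // data is discarded; only the access pattern matters
    }
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```

On an HDD the shuffled pass is dominated by head seeks; on an SSD the two passes come out much closer, which matches the 40 ms vs 4 s numbers above.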