Everything posted by Josh

  1. If you have a panel with a lot of sub-objects, like the object properties panel in Leadwerks, which has a ton of detailed controls for different entity properties, redraws can slow down when the window or side panel is resized. This is because when a widget is resized, all of its children have to be adjusted, even if they are hidden. An easy way to improve this is to set the widget layout of the panel to (1,0,1,0) when it is hidden:

```cpp
panel->SetHidden(true);
panel->SetLayout(1, 0, 1, 0);
```

This locks it to the top-left corner so it doesn't get resized when the parent resizes, and all of its children can safely be skipped in the layout update routine, since its size does not get modified when the parent resizes. When you show the panel, do it like this:

```cpp
auto parent = panel->GetParent();
auto sz = parent->ClientSize();
panel->SetShape(0, 0, sz.x, sz.y);
panel->SetLayout(1, 1, 1, 1);
panel->SetHidden(false);
```

When I did this with the Ultra Engine editor side tabber panels, side panel resizing got noticeably faster. Another trick is to set a panel's background alpha to 0 if it's the same color as the underlying surface; background rendering for that widget will then be skipped:

```cpp
panel->SetColor(0, 0, 0, 0);
```

This can make redrawing a little faster, especially if you have a lot of overlapping panels.
  2. Updated 1.0.1:
     • Fixed crash on exit mentioned here.
     • May prevent a possible crash when rendering some widgets for the first time in 3D interfaces.
     • TextField widgets no longer respond to the up and down keys, just left and right. This is because I plan to use the up and down keys to scroll through previous entries in the editor console, like a command prompt.
     • Added a check to prevent windows from being sent into infinite event loops.
  3. Actually, I take that back. I already programmed it with multi-line support:

```cpp
#include "UltraEngine.h"

using namespace UltraEngine;

int main(int argc, const char* argv[])
{
    //Get the displays
    auto displays = GetDisplays();

    //Create a window
    auto window = CreateWindow("Ultra Engine", 0, 0, 800, 600, displays[0]);

    //Create User Interface
    auto ui = CreateInterface(window);

    //Create widget
    auto label1 = CreateLabel("First line\nSecond line\nThird line", 20, 20, 120, 60, ui->root, LABEL_BORDER | LABEL_CENTER | LABEL_MIDDLE);

    while (window->Closed() == false)
    {
        WaitEvent();
    }
    return 0;
}
```

However, when a 3D GUI is in use the line return is ignored:

```cpp
#include "UltraEngine.h"

using namespace UltraEngine;

int main(int argc, const char* argv[])
{
    auto displays = GetDisplays();
    auto window = CreateWindow("Ultra Engine", 0, 0, 800, 600, displays[0]);
    auto framebuffer = CreateFramebuffer(window);

    auto world = CreateWorld();
    auto camera = CreateCamera(world, PROJECTION_ORTHOGRAPHIC);
    camera->SetPosition(float(framebuffer->size.x) * 0.5f, float(framebuffer->size.y) * 0.5f, 0);

    auto font = LoadFont("Fonts/arial.ttf");
    auto ui = CreateInterface(world, font, framebuffer->size);
    auto label = CreateLabel("First line\nSecond line\nThird line", 20, 20, 120, 60, ui->root, LABEL_BORDER | LABEL_CENTER | LABEL_MIDDLE);

    while (window->Closed() == false)
    {
        world->Update();
        world->Render(framebuffer);
    }
    return 0;
}
```

So I need to adjust the sprite creation code to handle multi-line text, and then everything should work the same in Vulkan.
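A fix along these lines would split the label text on newline characters before creating sprites. This is just a sketch of the splitting step under that assumption, not the engine's actual sprite code:

```cpp
#include <string>
#include <vector>

// Hypothetical helper: split a label's text on '\n' so each line can become
// its own sprite, offset vertically by the font's line height.
std::vector<std::string> SplitLines(const std::string& text)
{
    std::vector<std::string> lines;
    size_t start = 0;
    while (true)
    {
        size_t pos = text.find('\n', start);
        if (pos == std::string::npos)
        {
            lines.push_back(text.substr(start)); // last (or only) line
            break;
        }
        lines.push_back(text.substr(start, pos - start));
        start = pos + 1;
    }
    return lines;
}
```

Each resulting line would then get its own vertical offset (line index times line height) when the text geometry is built.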
  4. PBR is in use by default. The default roughness is 1.0 (maximum) and the default metalness is 0.0 (minimum). If a metal/roughness texture is in use, it will be used to multiply the metal/roughness values. Metal/roughness textures are the same as in the glTF format, with the green channel storing roughness and the blue channel storing metallic: https://registry.khronos.org/glTF/specs/2.0/glTF-2.0.html#metallic-roughness-material

Finally, PBR lighting relies a lot on reflections. To get the correct look you must apply a diffuse and a specular environment map to the world: https://www.ultraengine.com/learn/World_SetEnvironmentMap?lang=cpp

The specular texture can also be used as the world background, or for a more abstract look you can use the diffuse texture as the background, as I did here: https://www.ultraengine.com/community/gallery/image/2548-asset-editor-wip/

These textures can be generated from an HDRI image with the Khronos IBL sampler tool, which is included in the Tools folder. This outputs KTX2 files, which can be loaded with the KTX2 plugin or converted to DDS format with this code: https://www.ultraengine.com/community/blogs/entry/2780-building-a-single-file-4k-hdr-skybox-with-bc6-compression/?tab=comments#comment-14747

There are also some ready-made environment maps in DDS format here you can use: https://github.com/UltraEngine/Assets/tree/main/Materials/Environment
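The channel convention above can be illustrated with a tiny standalone helper. The Pixel struct and function names are made up for the example and are not engine API; the only assumption taken from the post is that the material's values are multiplied by the sampled green (roughness) and blue (metallic) channels:

```cpp
// glTF-style metal/roughness packing: green = roughness, blue = metallic.
struct Pixel { float r, g, b; };

float FinalRoughness(float materialRoughness, Pixel texel)
{
    return materialRoughness * texel.g; // green channel stores roughness
}

float FinalMetalness(float materialMetalness, Pixel texel)
{
    return materialMetalness * texel.b; // blue channel stores metallic
}
```

With the defaults described above (roughness 1.0, metalness 0.0), the texture fully controls roughness while the surface stays non-metallic unless the material's metalness is raised.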
  5. This is the intended behavior. The TextArea widget is intended for displaying multi-line text. It might be possible in the future for labels to support multiple lines, but it needs to be done carefully because labels have a lot of alignment options in their style flags.
  6. You can create a new class derived from the Object class like this:

```cpp
class Viewport : public Object
{
public:
    shared_ptr<Panel> panel;
    shared_ptr<Window> window;
    bool dirty;

    Viewport()
    {
        dirty = false;
    }
};

auto viewport = make_shared<Viewport>();
```

This class can be cast to an Object, so it can be supplied in the extra parameter of ListenEvent().
  7. Okee-dokee, I have updated the documentation with another example that shows how to create an event-driven desktop application with a 3D viewport embedded in the interface. See the third example here: https://www.ultraengine.com/learn/CreateInterface?lang=cpp

Modal Loops ➰

When a window is resized, the system goes into a loop and doesn't come out until the mouse is released. This is a "blocking" or modal loop. Windows, Linux, and Mac all work like this. It's an OS-level thing, and I think it must be done to make window resizing as snappy and responsive as possible. If you are using ListenEvent you will intercept WINDOWSIZE events during the loop, but if you are using WaitEvent() you won't see anything until the mouse is let go. Therefore, in order to update the viewport window as the window is being resized, you must use an event listener.

When to Render ⏰

In my OpenGL example linked above, I redraw the viewport every time a WINDOWSIZE event occurs, inside the system loop. This is probably not a good idea with Vulkan, because every time the viewport changes size it has to allocate a new color and depth buffer. (OpenGL might also do this, but if so it is hidden from the user.) It's better to resize the viewport dynamically, but only re-render the scene once the resize operation is done, the modal loop exits, and program execution goes back to the main loop (when the WaitEvent call returns). If you just resize the viewport without rendering again, the appearance will vary depending on the driver / manufacturer of your GPU: Nvidia drivers will stretch the image, and Intel drivers will leave the image as-is, with a "hall of mirrors" effect if the window is made bigger. Either of these behaviors is fine.

Minimizing Renders 📷

When you do resize a window, you will see a bunch of WINDOWPAINT events coming out of WaitEvent(), one for each mouse movement you made. If you just render the viewport every time a WINDOWPAINT event occurs, your program will become unresponsive for a moment after you resize the window, because it has to render the viewport 100 times for no good reason. To prevent this, the example keeps track of the viewport state with a "dirty" variable. If a WINDOWPAINT event occurs and the viewport has not been invalidated yet, the viewport is invalidated (the "dirty" variable is set to true) and a custom VIEWPORTRENDER event is emitted. Since this event goes to the end of the event queue, it is evaluated last, after all the remaining WINDOWPAINT events are skipped. The result is that the viewport only renders once each time the window is resized. When you run the example, the printed output will show exactly what is happening.

A Bug! 🐛 🤯

Finally, this example is producing a crash on exit, so I will take a closer look today and get that fixed for you. Fix is available on branch 1.0.1!
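The coalescing logic described above can be sketched with a simulated event queue. The enum and function here are simplified stand-ins for the engine's event system, just to show why only one render happens per resize:

```cpp
#include <queue>

// Simplified stand-ins for engine event IDs.
enum EventId { WINDOWPAINT, VIEWPORTRENDER };

// Drain the queue, coalescing any number of WINDOWPAINT events into a single
// VIEWPORTRENDER that lands at the end of the queue. Returns how many times
// the scene was actually rendered.
int ProcessEvents(std::queue<EventId>& events)
{
    bool dirty = false;
    int renders = 0;
    while (!events.empty())
    {
        EventId e = events.front();
        events.pop();
        switch (e)
        {
        case WINDOWPAINT:
            if (!dirty)
            {
                dirty = true;                // invalidate the viewport once
                events.push(VIEWPORTRENDER); // goes to the end of the queue
            }
            // subsequent WINDOWPAINT events are skipped while dirty
            break;
        case VIEWPORTRENDER:
            ++renders;      // render the scene a single time
            dirty = false;  // viewport is valid again
            break;
        }
    }
    return renders;
}
```

Feeding 100 WINDOWPAINT events through this loop produces exactly one render, which is the responsiveness win the example demonstrates.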
  8. This is definitely doable. Basically, you create a child window with no titlebar and manually set its shape any time the parent window moves or resizes. To make a resizable window with a framebuffer, you also need to call AsyncRender(false) at the very beginning of your program: https://www.ultraengine.com/learn/AsyncRender?lang=cpp This will make it so the rendering all takes place inside the World::Render() call, instead of running on a separate thread. This mode is for event-driven programs instead of games that are constantly rendering. I'll put together an example for you tomorrow, but if you feel like taking a stab at it before then, look at the second example on this page: https://github.com/UltraEngine/Documentation/blob/master/CPP/OpenGL.md That is an old example using OpenGL with Ultra App Kit, but it's very similar to setting up a framebuffer in Ultra Engine. I'll get a new example up tomorrow.
  9. It's using LuaJIT, also.
  10. Sounds good, just remember that you'll need to disable Lua sandboxing in the editor options in order to access the system commands from Lua. Windows 10 has curl built-in now, like Linux, FYI.
  11. It should work fine: https://www.leadwerks.com/learn?page=API-Reference_Object_Leaderboard
  12. Yes, if you're doing tri-planar mapping, then you know the vectors you are using to generate the texture coordinates, so you can use them for the tangent/bitangent. These just mean "the horizontal and vertical directions the texture is oriented with in 3D space relative to the mesh". If you are just using the X and Z positions for texcoords, for example, then your tangent and bitangent would just be (1,0,0) and (0,0,1). "Binormal" is sometimes used instead of "bitangent", and I accidentally say it sometimes, but it's not actually the correct term.
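For illustration, here is one way the projection axis could select the tangent and bitangent in a tri-planar setup. The Vec3 struct and the specific axis pairings for the side planes are assumptions made for this sketch, not engine code; the Y-dominant case matches the (1,0,0)/(0,0,1) example above:

```cpp
#include <cmath>

// Stand-in vector type for the example, not the engine's Vec3 class.
struct Vec3 { float x, y, z; };

// Pick tangent/bitangent from the dominant axis of the face normal, i.e.
// from the same vectors used to generate the tri-planar texcoords.
void TriplanarBasis(Vec3 normal, Vec3& tangent, Vec3& bitangent)
{
    float ax = std::fabs(normal.x);
    float ay = std::fabs(normal.y);
    float az = std::fabs(normal.z);
    if (ay >= ax && ay >= az)   // top/bottom: texcoords from X and Z
    {
        tangent = {1, 0, 0};
        bitangent = {0, 0, 1};
    }
    else if (ax >= az)          // faces pointing +/-X: texcoords from Z and Y
    {
        tangent = {0, 0, 1};
        bitangent = {0, 1, 0};
    }
    else                        // faces pointing +/-Z: texcoords from X and Y
    {
        tangent = {1, 0, 0};
        bitangent = {0, 1, 0};
    }
}
```

The point is simply that no reconstruction from texcoords is needed: the projection already tells you the horizontal and vertical texture directions.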
  13. This is how the engine does it, but if you are procedurally generating the texture coordinates, it's a lot easier to reuse those vectors for the binormal/tangent, instead of reconstructing them from the texcoords:

```cpp
bool Mesh::UpdateTangents()
{
    Vec3 s, t, result;
    float sgna, sgnb, sgnc, m, r;
    float v1x, v1y, v1z, v1u, v1v;
    float v2x, v2y, v2z, v2u, v2v;
    float v3x, v3y, v3z, v3u, v3v;
    float x1, x2, y1, y2, z1, z2, s1, s2, t1, t2;
    int a, b, c, i, d;

    std::vector<Vec3> normals(vertices.size());
    for (int v = 0; v < vertices.size(); ++v)
    {
        normals[v] = vertices[v].normal;
    }
    std::vector<bool> vertexupdated(vertices.size());

    if (polygonpoints < 3) return false;
    auto polys = CountPrimitives();

    for (i = 0; i < polys; i++)
    {
        a = GetPrimitiveVertex(i, 0);
        b = GetPrimitiveVertex(i, 1);
        c = GetPrimitiveVertex(i, 2);
        d = 0;
        if (polygonpoints == 4) d = GetPrimitiveVertex(i, 3);

        if ((vertexupdated[a] == false) or (vertexupdated[b] == false) or (vertexupdated[c] == false) or (polygonpoints == 4 and vertexupdated[d] == false))
        {
            //This should never happen, but skip if it does
            if (a == b or b == c or c == a) continue;

            v1x = vertices[a].position.x; v1y = vertices[a].position.y; v1z = vertices[a].position.z;
            v1u = vertices[a].texcoords[0]; v1v = vertices[a].texcoords[1];
            v2x = vertices[b].position.x; v2y = vertices[b].position.y; v2z = vertices[b].position.z;
            v2u = vertices[b].texcoords[0]; v2v = vertices[b].texcoords[1];
            v3x = vertices[c].position.x; v3y = vertices[c].position.y; v3z = vertices[c].position.z;
            v3u = vertices[c].texcoords[0]; v3v = vertices[c].texcoords[1];

            //Edge vectors in position and texcoord space
            x1 = v2x - v1x; x2 = v3x - v1x;
            y1 = v2y - v1y; y2 = v3y - v1y;
            z1 = v2z - v1z; z2 = v3z - v1z;
            s1 = v2u - v1u; s2 = v3u - v1u;
            t1 = v2v - v1v; t2 = v3v - v1v;

            m = (s1 * t2 - s2 * t1);
            if (m == 0.0f) continue;
            r = 1.0f / m;

            s.x = (t2 * x1 - t1 * x2) * Sign(r);
            s.y = (t2 * y1 - t1 * y2) * Sign(r);
            s.z = (t2 * z1 - t1 * z2) * Sign(r);
            t.x = (s1 * x2 - s2 * x1) * Sign(r);
            t.y = (s1 * y2 - s2 * y1) * Sign(r);
            t.z = (s1 * z2 - s2 * z1) * Sign(r);

            m = s.Length();
            if (m == 0.0f) continue;
            s /= m;
            m = t.Length();
            if (m == 0.0f) continue;
            t /= m;

            s.Cross(normals[a], result); sgna = Sign(t.Dot(result));
            s.Cross(normals[b], result); sgnb = Sign(t.Dot(result));
            s.Cross(normals[c], result); sgnc = Sign(t.Dot(result));

            if (vertexupdated[a] == false)
            {
                vertexupdated[a] = true;
                vertices[a].tangent = s;
            }
            if (vertexupdated[b] == false)
            {
                vertexupdated[b] = true;
                vertices[b].tangent = s;
            }
            if (vertexupdated[c] == false)
            {
                vertexupdated[c] = true;
                vertices[c].tangent = s;
            }
            if (polygonpoints == 4)
            {
                if (vertexupdated[d] == false)
                {
                    vertexupdated[d] = true;
                    vertices[d].tangent = s;
                }
            }
        }
    }
    Finalize();
    return true;
}
```
  14. You can create an interface directly on a window, in which case the system drawing commands will be used (GDI+, Quartz, XRender). This is the way Ultra App Kit works. This provides the most responsive interface, and it also looks good because it uses the system font and text rendering settings (TrueType, etc.). This mode should be used for desktop applications (like the Ultra Engine editor).

You can also create an interface that gets rendered in Vulkan. This is considered a "3D interface" because it can be rendered flat on the screen, onto an object in the world (like the interactive panels in the more recent DOOM games), or floating in the air for a VR interface. With this mode, you can now display a texture directly on a panel without using a pixmap. Note that when you do this, you must manually feed events into the interface with the Interface::ProcessEvent method. This allows you to do things like map texture coordinates in the 3D world to GUI screen coordinates.

The exception above is my fault, and it will be fixed in the next update. However, that code would not work anyway. When you call Camera::Render it does not render immediately. All it does is store an instruction in a command buffer that tells the rendering camera to increment the number of frames it is supposed to render before it stops rendering. All rendering is on a separate thread, so that command buffer actually doesn't get executed until the next call to World::Render. (Well, it gets added to a stack of command buffers and then triggers a semaphore that tells the rendering thread a new command buffer is ready, the next time it cycles around and is ready for new instructions 🤪) It would be best to set realtime mode to false and just make a call to Render() whenever the camera needs to refresh. Otherwise it would be very hard to synchronize things with the rendering thread. When you call Camera::Render() you can be sure the camera will render one more time before it stops.

The camera doesn't render immediately, but neither does the surface with the texture it is rendering to, so that latency doesn't matter. It's stored in a shared pointer, so there is no need to delete anything at all. Just set the variable to NULL and it will magically disappear. That goes for all objects in Ultra. But in this particular case, I would recommend holding onto the UI camera and just calling Render() whenever you want it refreshed.
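The "render one more frame, then stop" behavior described above can be sketched as a simple counter. The class and member names here are illustrative stand-ins, not the engine's internals:

```cpp
// Conceptual sketch: Camera::Render() on a non-realtime camera just bumps a
// counter of frames to draw; the rendering thread decrements it each cycle.
class CameraCounter
{
public:
    int framesremaining = 0;

    // Main thread: queue one more render (what Camera::Render conceptually does).
    void Render() { ++framesremaining; }

    // Rendering thread: called each cycle; returns true if a frame was drawn.
    bool RenderFrame()
    {
        if (framesremaining == 0) return false; // nothing queued, camera idle
        --framesremaining;
        return true;
    }
};
```

This is why calling Render() guarantees exactly one more frame: the counter is incremented on the main thread and consumed asynchronously by the renderer, so no explicit synchronization with the rendering thread is needed.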
  15. You could call Camera::SetRenderTarget with NULL as the argument. If you have any trouble with render synchronization, you can also use a second camera.
  16. Here's something else that might be relevant to what you're doing: https://www.ultraengine.com/learn/Camera_SetRealtime?lang=cpp https://www.ultraengine.com/learn/Camera_Render?lang=cpp By default, a camera constantly renders, but you can also set it to only refresh when it is manually triggered to do so. Since all rendering takes place on a separate thread, and may not match the frequency of the main thread, this provides precise control you wouldn't have just by hiding and showing the camera.
  17. Some nice vector format logos are available here, black and white: https://www.ultraengine.com/company
  18. I sent a description of this issue to AMD along with a sample they can run.
  19. I've now added support for Widget::SetTexture on the 1.0.1 branch: https://www.ultraengine.com/learn/Widget_SetTexture These changes are now available on the 1.0.1 branch. Please reinstall the client app to be able to detect and install the update, as I had to make some small changes to handle multiple branches: https://ultraengine.github.io/files/UltraClient.exe
  20. These changes are now available on the 1.0.1 branch. Please reinstall the client app to be able to detect and install the update: https://ultraengine.github.io/files/UltraClient.exe
  21. This is the very first update for the Ultra Engine SDK! This update occurs on the 1.0.1 branch. Here's what I changed:
      • Added Widget::SetTexture method.
      • CreateTexture will now return NULL if an RGB/BGR format is used, since these are unsupported in Vulkan.
      • Widget::SetPixmap will now automatically convert a pixmap to RGBA if it is in a format that cannot be displayed.
      The client application was having some problems detecting updates on the 1.0.1 development branch, and I had to fix some small issues. In order to detect updates on the 1.0.1 branch, please download and install the new build of the client application: https://ultraengine.github.io/files/UltraClient.exe Once you install that, you will be able to smoothly detect and download updates. Note that the 1.0.0 branch is currently the default option. It is considered a "stable" build that will never change. To switch to the current "development" branch (1.0.1), just press the uninstall button, select the new branch you want, and press install.
  22. Actually, I will make it so Widget::SetPixmap assigns a converted RGBA pixmap automatically, if needed. This is already done in the windowed interface, so I don't see a problem, and the pixmap loader will still strictly load the format that actually appears in the file. CreateTexture will return NULL if an RGB/BGR format is specified.
  23. Okay, the pixmap is loading correctly, but since it is in BGR format, Vulkan does not like it when you try to create a texture with it. I think what I should do is add a check in CreateTexture() that rejects unsupported formats, and then the user will have to convert such a pixmap to RGBA format (using the Convert method). There may be legitimate reasons someone wants to work with RGB or BGR format pixmaps, so I am reluctant to auto-correct that in the pixmap loader like I did in the texture loader.
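For anyone curious what the RGBA conversion amounts to, here is a minimal standalone sketch of expanding 3-byte BGR pixels to 4-byte RGBA (the layout Vulkan accepts). This is an illustration, not the Pixmap class's actual Convert implementation:

```cpp
#include <cstdint>
#include <vector>

// Expand tightly packed BGR bytes into RGBA, swapping the red and blue
// channels and appending an opaque alpha byte per pixel.
std::vector<uint8_t> BGRToRGBA(const std::vector<uint8_t>& bgr)
{
    std::vector<uint8_t> rgba;
    rgba.reserve(bgr.size() / 3 * 4);
    for (std::size_t i = 0; i + 2 < bgr.size(); i += 3)
    {
        rgba.push_back(bgr[i + 2]); // R (was third in BGR order)
        rgba.push_back(bgr[i + 1]); // G
        rgba.push_back(bgr[i]);     // B (was first in BGR order)
        rgba.push_back(255);        // A = fully opaque
    }
    return rgba;
}
```

The cost is one extra byte per pixel, which is why rejecting RGB/BGR in CreateTexture and letting the user convert explicitly keeps the choice visible.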
  24. Alembic: Here's some more info: http://www.alembic.io/ https://github.com/alembic/alembic It's funny you should ask about that, because I have had some inquiries lately from the film industry, and game and movie technology seem to be merging. I'm interested.
  25. Alembic: What is Alembic? I found it, I am looking into it... Lua support will be very good when it is ready. I'm using a new binding library that works nicely with C++ shared pointers, so you can actually use the Lua garbage collector instead of having to manually delete objects. Debugging Lua works really well in Visual Studio Code. And then finally, Lua scripts will be used to write extensions for the new editor. A script can hook into events so it can replace or modify program behavior, hide or add different parts of the interface, create new windows and panels, etc. It will be very cool to see what people create with it.