Raygui – A simple and easy-to-use immediate-mode GUI library (github.com/raysan5)
258 points by maydemir on March 26, 2022 | hide | past | favorite | 110 comments


Related, check out raylib, the rendering/game engine made by the same author!

I’ve used Raylib and raygui on a few RPi based projects and recommend it. It’s a really simple, intuitive way to get an OpenGL-based UI running. Good alternative to web-based UIs because it has similar simplicity but runs on devices with lower specs. I built this pinball machine with raylib: https://youtu.be/iiBn7FVzlcc


Ray is amazing. He's been at it since around 2014. Super friendly, super responsive dude


Brilliant project and execution. I love the idea of bringing the tactile experience together with a digital gaming experience.


This is fantastic, it’s a shame this doesn’t have more views. If it’s okay, I submitted it to the hackaday.com tip line.


Thanks! A little more detail at https://www.chrisdalke.com/projects/mini-pinball-machine/, and the Raylib-based code is on GitHub: https://github.com/chrisdalke/mini-pinball-machine


Raylib is great and one of my preferred libraries for writing emulators in. But I will say if you’re expecting something like imgui or nuklear with Raygui, it’s not nearly as advanced or usable.


Curious what makes Raylib good for emulators? I’m genuinely interested in this.


I've been making a simple Wolfenstein 3D clone with raylib and C; it's so easy to jump into, even for a complete newbie in C like me!


That's just super cool. I love what you did with the scoreboard!


Before you start picking up a library like this, keep in mind that no immediate-mode GUI library I'm aware of (including this one from what I can see in the source code) has support for assistive technologies such as screen readers.

To them the whole UI is one big black box, so your program is completely useless to anyone with a wide range of disabilities.


The immediate mode Gio (gioui.org) toolkit has support for accessibility (on Android for now), IME and, recently, RTL text layout.

Disclaimer: I'm the author.


I can also report some modest progress on my own work on accessibility of immediate-mode GUIs. I have a branch of the Rust egui library [1] that has basic accessibility on Windows using my AccessKit project [2]. I do have a long way to go to make this fully usable and ready to submit upstream, especially when taking non-Windows platforms into account.

[1]: https://github.com/mwcampbell/egui/tree/accesskit

[2]: https://github.com/AccessKit/accesskit


That’s great, thanks for letting me know!


pretty awesome lib!


Do operating systems have APIs so a program can tell them what is on the screen? I mean just a description, like: there is a button titled "Push me" in the rectangle (50, 50, 100, 70), etc.

Or are there any "screen reader" libraries?


Even systems like Qt, where things are drawn using non-native widgets, support that. (I was long a proponent of wxWidgets, because I thought assistive tech was only supported if you used native widgets. The idea of an API where you describe what you are showing and how to interact with it never came to my mind, but with ARIA that's the way any browser does it.)

Not only that, but I think assistive tech should make GUI testing easier too, though I haven't dug into that idea yet (probably folks already have).


Yes, OSes have APIs. Another commenter talked about the Mac one; Linux has AT-SPI, and Windows has MSAA (historic) and UIA. If I'm not mistaken, they generally represent the UI as a tree of items, as this is what assistive technology expects / works best with.


Yes. I don’t know about Windows but I know that for OSX you need to implement the NSAccessibility protocol in AppKit. This gives you a way to describe the hierarchy of the UI and to notify the system of important changes.


There are a11y (accessibility) standards defined by the W3C: https://www.w3.org/TR/using-aria/.


Yes they do, at least Apple, Google and Microsoft ones.

On GNU/Linux, I think the only thing that exists is the Gtk+ accessibility support that was originally sponsored by Sun.


Aren't you then trying to bridge an immediate mode API (rendering controls) and a retained mode API (assistive tech)?


Yes. This has influenced the design of my AccessKit [1] project. With AccessKit, the application (or GUI toolkit) pushes tree updates to the platform adapter, which maintains a complete tree that it uses to implement the platform-specific accessibility API. Each application-supplied tree update can be a full tree or just the nodes that have changed. So an immediate-mode GUI can push a full tree every frame. There just has to be some way of keeping node IDs stable across frames.

[1]: https://github.com/AccessKit/accesskit


I've been using an immediate mode API that generates HTML elements on the web, so you do get the things that HTML offers. I basically have a thin wrapper around http://google.github.io/incremental-dom/ (I'm using it from wasm) -- calling it per frame has actually been reasonable perf-wise.


Thanks for making the post I was coming to make :)

While it's popular to mock electron, it does make it easy to make accessible cross-platform apps, whereas most of the "minimal alternatives" are entirely inaccessible.


I think egui (Rust) supports screenreaders


Seeing that it uses char* for strings, I'd also assume it doesn't support unicode and thus non-Latin scripts.


Raygui supports unicode for input and output.

https://github.com/raysan5/raygui/issues/99


I have seen 'immediate mode GUI' quite often on HN and elsewhere these days. I rarely do GUI work myself; where are these imgui use cases? Gaming only? Or can embedded boards with limited resources benefit from drawing some pixels on a small LCD for a simple GUI (though even there, people seem more likely to use some lightweight X11-like library instead of an imgui library)?

Other than game development, why is imgui better than the normal widget kits we are relatively more familiar with (GTK, Qt, etc.)? Or is imgui specifically for game coding?


IMO the main benefit of immediate mode GUI is that (unlike "normal widget kits") the data required to "drive" the widgets is not actually owned by the widgets themselves; the widgets are controlled directly by the program's data. You make some sort of change to the program's data and it is immediately reflected on the screen. This is as opposed to the "normal widget kits", where you need some "glue" to reflect those changes: for example, you may need to let a widget know its contents were updated and it needs to redraw or redo its layout.

This makes it easier to create certain types of UIs where the main interaction is to modify some data structure directly, which is how many simple tools behave.


The unfortunate truth about these toolkits is that they need to keep a ton of shadow state internally and diff it against the data coming in with each new frame. The most trivial example would be a button. The user redeclares this button each frame as if it were new, but the UI must unify that into a single persistent button instance to get the stateful button behavior right. Every interactive control needs similar logic. In essence you trade tailor-made, performant UI update code for implicit, complicated, and expensive semi-"transparent" generic update logic. It works for simple UI patterns without too much data in the controls, but anything heavier will run into problems.


That's not at all true, at least for Dear ImGui.

I reimplemented the text rendering in it at one stage (we needed to support arbitrary unicode strings, specifically file names with CJK characters), drilling all the way down to individual OpenGL calls to get it running [1].

Apart from any driver-level caches, there really is no UI state.

[1] For what it's worth, this was an easier undertaking than simply recompiling Qt.


What you're saying is completely wrong. ImGui transparently creates structs with internal widget state for every slightly more complex widget on the screen. Look at the state structs in imgui_internal.h to get an idea of what I'm talking about, e.g. ImGuiInputTextState or the huge ImGuiWindow struct. The apparent statelessness of immediate mode GUIs is an illusion produced by an extra layer of internal logic that manages that state.


ImGuiWindow is an exception to the rule in that it basically serves as a canvas for other widgets to be drawn on.

Regarding complex widgets like TextInput or Tables having state, I'm completely OK with that. The problem with state in a retained-mode GUI is that it's stored in two places and can easily fall out of sync/create performance-related footguns/create unnecessary work building widgets. Whether or not they're "pure" immediate mode is not the point.


Many people at a certain point in their careers become infatuated with the idea of making a "small", "simple", and "opinionated" component free from legacy "BS". They have the skill to implement the core of that component but lack the wisdom and humility to understand that the "BS" complexity exists for good reasons, that their "small" component is subject to those reasons, and that they haven't actually found a fundamentally better approach that avoids the "BS", but merely skipped the hard parts of software engineering.

You just have to let people go through this phase. You should nod, smile, and compliment them on the economy of their vision. This phase hones skills people need for more serious endeavors. But God help you if you actually use one of these "small" systems in production!


Immediate mode is generally used for quick and dirty developer tools. It's rarely used for user-facing UI. You're free to waste your time carefully architecting an MVC application any time you want a few text boxes and sliders to fiddle with something. Microsoft, Google, Nvidia, Valve, Id, Blizzard, Ubisoft will continue to use immediate mode where they deem it useful, in spite of your advice.


Apparently you know the feeling when a competitor comes up and does 80% of what you're doing for half the cost (in dev time, memory footprint, etc.). Hell, it still feels unfair even when it's a coworker who gets tasked with redoing (usually with some trendy new stuff) the system you've been maintaining for so long and know by heart, and that you've dreamed of rewriting for an equally long time.

You comfort yourself by saying they will run into the same problems, which are just waiting their turn in the dark corners of software or hardware. But with their smaller and simpler software they are generally in a much better position to deal with those problems than the old software is.


While your heuristic may be accurate in the general sense, it's absolutely not the case for immediate-mode GUIs. I've used the technology on a real commercial product before and it was vastly superior to the old way of doing things.

The "BS" complexity of retained-mode GUIs, as you suggested, is not really BS but the logical result of using a client/server model. That said, for the average desktop application (where all necessary state is local to the user's own machine) this model really is a poor fit and ImGUIs shine.

A lot of the evangelism comes from the fact that many more conservative devs still insist on using the more complex client/server retained-mode paradigm, even when immediate-mode is more suitable to the task at hand.


>Other than game-development why is im-gui better than those normal widget kits we are relatively more familiar with: gtk, qt,etc)?

I was tasked with using both types of frameworks back in my old job.

ImGui eliminates entire classes of bugs related to state synchronisation, and was much easier to make performant because it was overall simpler[1].

It's also much simpler to customise, as widgets are nothing more than functions that draw graphics according to a couple of input variables.

I think a lot of people get confused, as they think ImGuis mean redrawing every asset every frame and therefore performance should be bad. In practice, though, there are driver-side caches and it's absolutely excellent to both work with and use the results of.

[1] While in theory retained-mode GUIs should be as performant as ImGuis or better, in practice it's much easier in retained-mode to accidentally perform an expensive action multiple times in a row in response to a single user interaction. Just tick "select all" on a WordPress page with 100+ elements and you'll see it immediately (well, immediately after several billion cycles have passed anyway!).


The immediate mode gui is largely a myth, IMHO.

The concept of a GUI fully controlled by program data works well for interfaces consisting of a single button. As soon as we have two buttons, each button has a position, and the positions are data of the GUI itself (retained data), not of the program.

Imagine a multi-tab interface - which tab is currently active and has its controls drawn is the property of the gui (retained data), again.

An action that can be invoked through a menu and can also be added to a toolbar. The toolbar can be expanded or collapsed. Again, that's data of the gui itself (of the view), not of the program (the model).

The currently focused button should be highlighted somehow - the property of the gui again.

Selected elements of a list (they should be highlighted). Expanded/collapsed state of a tree widget nodes. Scroll positions.

So, the immediate mode guis are interesting and somewhat liberating because they demonstrate one can easily build simple ui without complex widget libs, but the results are limited, and very soon stop being really immediate mode. When evolving, they become reinvented wheels of retained mode guis.


All of the functionality you have described can be easily implemented in an immediate-mode UI framework, and in my opinion much more easily than in retained-mode UI frameworks. (I know this because I have used Dear ImGui extensively, and have done almost all of the things you've mentioned.)

I think you’re arguing from some fundamental “philosophical” stance on what data should be owned by the user and what data should be owned by the GUI. But I feel that, pragmatically, immediate-mode GUIs are the most efficient at creating complex interfaces (like the ones you have described). The parts where ImGui is lacking are complex flexbox layouting, theming, and animations, which aren’t required in editor-like applications for professionals but might matter for end-user applications.


>complex flexbox layouting, theming, and animations

The only way to implement those things in an immediate mode framework is to cache a lot of state. I haven't found this to be any easier; once the layout becomes complex enough, the amount of information that needs to be cached approaches what it would be in retained mode. Because of this I've also found the performance of immediate mode to be very bad. Especially when it involves:

- large pieces of reflowing text (this means no text editors or word processors)

- very large datasets in list/tree/column/grid widgets (this means no database editors or spreadsheets)

- something like a flexbox, css grid or a constraint-solver based layout (this means no responsive layouts)

These are all nice things that don't fit well with those patterns. Without any caching or keying it's functionally equivalent to having a React layout that diffs the entire tree before every redraw.

The main use cases for immediate mode appear to be quick prototyping and making debugging tools for games and simulators. That's not bad by any means, but I've never seen one that can efficiently do complex layouts with large datasets.


It's (more or less) just a different approach to describe a UI through code which does away with objects, event handlers or data binding. Very convenient for purely code driven UI development.


Immediate mode guis let you structure your code differently than traditional libraries. (gtk, etc.)

It can be such a fast and light way to work. Get this thing done, and move on to the next thing.

To me it’s just a valuable addition to our toolbox and our quest to use the right tool for the right job.


Retained UIs can be faster, because they only run the code related to the widget tree that is actually needed.

It is like in games: everyone ends up implementing their own scene graph while asserting they don't need the one offered by retained mode APIs.


Yeah, but isn't there a CPU cost to having to run through your code on every frame?


Immediate mode GUIs can be used for almost anything that retained mode GUI toolkits (GTK, Qt) can be used for. It removes a lot of the overhead and boilerplate to speed up development.


The bundled font is really awful ... I hope it's interchangeable


It does support changing the font


Very cool. There are already immediate mode libs with a focus on tools. What I’m more interested in is really great GUI libs that make architecting multimedia or audio applications easy without requiring me to learn and use game engines, ideally on top of OpenGL (ES) with Linux (ARM) support. I am thinking about applications where you want cooler UI effects and UX than what we are all used to, like what you sometimes see in games. I just don’t want to do all the drawing myself and don’t want to use game engines. Stuff like Qt is too heavyweight and too focused on traditional UI. Most UI frameworks also have little built-in support for advanced producer/consumer or multithreading/work-load-balancing scenarios and expect you to roll all that stuff yourself over and over.


JUCE is the best option. Shopify came out with React Native Skia, which lets you draw whatever you need. You can also use Skia directly, but it's only a drawing library. I wanted to use something like ImGui, but it's hard to theme and not designed for consumer products. This raygui looks cool, but it uses bitmap fonts, which is no good for i18n.

Edit: Looks like it uses real fonts; it just includes a crappy one. I'll have to take a closer look at this.


Immediate mode GUIs are usually pretty inefficient. I wouldn't use it as a permanent solution in a product. More like a quick way to add controls to avoid having to close the app and edit a config file, or recompile with different params.

Usually you want app GUIs with a style that matches the rest of the apps, though mobile kinda changed that. I don't know of a performant UI framework for building non-traditional desktop apps.


Cocoa and CoreAnimation on the Mac allow you to apply arbitrary CoreImage filters to UI elements (not just images) and you can even write your own filters using the Metal shading language (though that’s basically doing a lot of the drawing yourself).

Unfortunately this doesn’t work on iOS (probably for power consumption reasons). Apple does have some filters for things like Gaussian blurs, but they are a private API.


Would it be possible to define the GUI with XML? Unity just added this officially, and it makes life much better.

Then you do something like rootGUI.Query("elementID").RegisterCallback( e => otherFunc(e));


The joy of immediate mode guis is that you don't have to register callbacks or manually modify (for example) the visibility of elements in response to some action. Instead you do:

    if checkbox(&show_debug, "Show debugging features") {
        if button("Run some test") {
            test()
        }
    }


Joy for some, pain for others. I don't miss coding GUIs like in the days of MS-DOS Mode X or Linux's SVGAlib.


But then if you need to change some non-interactive parts of the UI, you're digging through source code instead of modifying XML.

Defining the UI via code becomes cumbersome: stuff like adding attributes, etc.


Changing attributes of UI elements defined in code vs some XML format is in my mind the same level of effort.

button.SetBackground(<color>)

or

<button backgroundColor="<color>" />

I see no difference there.

EDIT: Ok the main difference I guess would be having to switch files (and thus context) which could be annoying. Plus there's also the non-zero performance impact from having to parse an XML format.


The nice thing about separating the view definition from the controller code is that you can work on the view without actually running code. Most retained mode GUIs have a standalone view editor. That's essentially not possible in immediate mode GUIs without a lot more mocking of code.


What if you have stacked elements, or elements within elements?

<Box> <Button> <Icon/> </Button> </Box>

In code it's something weird like

Box(Button(Icon()))


How is:

  Box(
    Button(
      Icon(...)
    )
  )
Weirder than:

  <Box>
    <Button>
      <Icon ... />
    </Button>
  </Box>


Now attach some callbacks.

With XML I'm attaching my callbacks via code while defining the UI elsewhere.

With pure code, I have to attach the callbacks directly to the object.

There's a reason you don't just use JavaScript to define React components.


You're in the wrong thread. :-)

The main benefit of most immediate mode guis I've seen is that it's trivial to expose internal values to a ui in one line of code. No callbacks, no looking for the right place in any XML file or similar. Instead you add these lines when you need them and they show up in a ui:

   checkbox(&godmode, "Activate Godmode")
   slider(&enemy_aggro, "Enemy aggression", 0, 100)


> Now attach some callbacks.

Not that any of this is relevant to immediate mode GUI, but the actual attachment of callbacks can be done identically in code no matter how the base UI design is done if you want it separately, but if you do it in code (whether with normal syntax or a funky transform like JSX) you can also have the option of direct attachment.

> With pure code, I have to attach the callbacks directly to the object.

No, you don't, but you can, and as it turns out, that's kind of popular.

> Theirs a reason you don't just use JavaScript to define React components

You can, in fact, but even when you use JSX, there is a reason you don't use actual (HT/X)ML code, but instead a language that compiles to, and lets you freely mix in, JavaScript.


Let's say you have a really complex UI.

Would you really define your UI in code over XML or HTML?


Yes.

The more complex and more dynamic the UI, the worse the separation gets. The worst of it being when you’re writing code to generate your XML/HTML, and probably introducing a third “templating” language to continue the pretense that HTML/XML is not just standard data structures defined in a different syntax.

The purpose of XML/HTML is to make your GUI specification independent of the programming language itself, and independent of any particular library (anyone can spit out an HTML <button>… it’s quite difficult for C++ to construct a Java Button class).

And there’s some notion that it’s easier for “non-programmers” to edit, without being bogged down in programming language details (it’s “easier” to learn HTML than Java… and as programmers we eat the cost by now having to learn both).


So now I'm assuming your approach would be to just have classes for your custom UI elements.

It might just be a Unity thing, but up until they released their XML UI tools, I could not build a decent UI for the life of me. Basic things like getting buttons to appear where I want them to are very hard without XML


It’s ultimately the same; XML largely maps to a class with attributes (and an identifier/navigation through XPATH) fairly strictly.

And it should largely take you the same amount of effort;

    Div ( 
      button(…)
    )
And

    <Div>
       <button>…</button>
    </Div>
is equivalent. Things like separating your definition from your styling isn’t a feature of HTML/CSS, it’s a feature of the API design — the API just happens to be encoded one way or the other. In-line callbacks vs a selector + callbacks is the same as well (you still need a way to navigate the hierarchy, but XPATH & co. makes just as much sense on an object tree as it does on XML — although with code, you could also just keep the pointer around)

I don’t know unity’s API’s, but I’d bet that their GUI API doesn’t actually look much like their XML API; and that if they did, you’d have had just as much, or little, trouble picking it up.


I prefer the best of both worlds. I've been making a YAML-like widget syntax in code using an immediate mode GUI written in Nim. It lets me write things like:

    Horizontal:
      box 10.WPerc, 100, 8.Em, 2.Em

      Button:
        text: fmt"Clicked4: {self.count:4d}"
        setup: size 8.Em, 2.Em
        onClick: self.count.inc()

https://github.com/elcritch/fidget/blob/devel/tests/basicwid...


It's entirely dependent on how whatever library you're using implements that pattern. For the code I just recently wrote that does a similar thing, it's literally just:

button.SetIcon(<icon-path-or-font-glyph>)

box.add(button)

Doesn't really seem all that weird to me.


This is almost by definition incompatible with the immediate mode paradigm and UXML is for UI Toolkit, which is retained-mode.


Wait until you find out about HTML and the document object in Javascript


The evolution of HTML and CSS clearly shows that HTML isn't suited to describing an interface without massive hints to a rendering engine to manipulate it.


You say that, but I have a really hard time making any native UI toolkit look how I want it to


There has been an explosion of immediate-mode GUIs recently, but I'm skeptical that an immediate-mode solution can cover our GUI needs now and into the future.

High refresh rate screens are a reality, so you only have 3-6 ms to render each frame, and so are high resolution screens at 4K-8K. That's a lot of pixels to push with a CPU on every frame. As another comment points out, we also need accessibility. I would generalize this need as "GUIs need to be reinterpretable"; this need is not limited to those with disabilities, and such a capability can be useful to anyone who wants to customize, automate, and integrate their digital tools.

Are immediate-mode GUIs up to the task? What is driving their authors to pursue this design? Why aren't they choosing a declarative design?


I don't get your point here. What's the problem with high refresh rates? Games deal with it no problem, and they do way more complicated stuff than a UI with a bunch of buttons and text.

> Are immediate-mode GUIs up to the task?

Yes, there's no reason why imgui's can't do any of that.

> What is driving their authors to pursue this design?

To name a few: ridiculous bloat and complicated development with things like Qt. No real GUI framework provided by Microsoft, apart from their 4-5 half-abandoned offerings.

Also imgui is way simpler to wrap your head around. It's just straight up function calls with no complicated abstractions.

> Why aren't they choosing a declarative design?

Why should it be declarative, though? imgui code looks fairly declarative when you read it, and it's easy to understand what UI events cause changes.


>What is driving their authors to pursue this design?

The fact that it requires an order of magnitude less code to do functionally the same thing as traditional toolkits.

>Why aren't they choosing a declarative design?

Because they aren't trying to build a native, standalone desktop application. Declarative design makes little sense for quick and dirty developer tools that just need to directly view and modify simple data structures.


>That's a lot of pixels to push with a CPU on every frame.

This is a common misconception.

At least with Dear ImGui, all of the actual rendering is performed by the graphics driver, which has a cache.

So, while you may have to queue 100 "render texture" calls every frame, there's no real difference between ImGUI and retained-mode when it comes to the actual frames being rendered.


With intelligent frame pacing you can sacrifice, say, a frame of input latency to sustain high frame rates. See https://raphlinus.github.io/ui/graphics/gpu/2021/10/22/swapc....

> Are immediate-mode GUIs up to the task? What is driving their authors to pursue this design? Why aren't they choosing a declarative design

Immediate mode UIs make managing state much easier, and avoids callbacks for event processing. That alone is worth it in my opinion.


God, the notion that declarative design will somehow fix everything and is the end-all be-all needs to go away. As people grow more experienced with programming they want more control over the flow of their program, so they can focus on fixing their bugs instead of playing the abstraction game to fix the (internal) bugs, if they can at all.

> Are immediate-mode GUIs up to the task?

Enough for companies like Unity/UE4 to sponsor the github projects.


All these pixels have to be pushed every frame even with classic GUI. The only difference is that it is usually done by the operating system.


The advantage of a declarative design is that it can be accelerated by the GPU which has a much better scaling computational architecture to serve this need.


"Immediate mode" just describes the programming model (e.g. how the UI is described with code and how the code reacts to user input), it doesn't tell anything about how rendering happens (or really anything that happens under the hood).


It does imply to some degree that the entire geometry of the UI isn't stored in GPU memory. It's possible to store an entire UI, with text and all, in GPU memory and render it with a single call (e.g. glDrawElements).


...and those GPU buffers need to be rebuilt as soon as anything in the UI changes. An immediate mode UI could just as well track state changes and only rebuild GPU buffers if needed.

Most just don't do this (and instead use dynamic buffers that are updated with new data each frame) because the state diffing would complicate the implementation and is usually "not worth it". But they still implement batching within a single frame and reduce the number of draw calls as much as possible (for instance, in Dear ImGui there's usually one draw call per scroll/clip region, so you get away with a handful or at most a few dozen draw calls even for complex UIs).


This doesn't make any sense. GPUs know nothing about declarative or imperative designs. Many immediate mode GUIs output a vertex buffer or a list of draw commands that can then be rendered anyway you choose, including having it rendered by a GPU.


Both declarative and imperative UI frameworks benefit just fine from GPU acceleration, that's orthogonal to the underlying drawing mechanism


Can you give more details on how this GPU acceleration works?


Dear-imgui, the most popular and mature immediate mode GUI library, renders with OpenGL/Vulkan/DirectX.


Does it support RTL languages?

My main problem with DearImgui is lack of RTL language support. I think one of the biggest factors which deter people from using immediate mode GUI is lack of support for many languages.


FWIW, Gio (gioui.org) recently added support for RTL languages and complex scripts. So there's no fundamental reason immediate mode UIs can't have the nice features you're used to from retained mode UIs.


What languages?



I believe they’re referring to languages that read “Right To Left”


Thanks!


Where's the source code? :D I looked under /src and didn't find much. I couldn't find raygui.c and that should be under /src.


Oops, it’s a header-only lib.


Immediate mode GUI is a very nifty concept. Kind of like learning about fully persistent data structures for the first time. But I wonder, does it scale?


Does anyone care to explain what immediate mode means?


Immediate mode is a style of API where important state is kept in user code instead of being retained inside the API implementation. For example, an immediate mode GUI checkbox implementation does not store a boolean value determining whether the checkbox is checked. Instead, user code passes that information as a function parameter whenever the UI needs to be drawn. Even the fact that a checkbox exists on screen is not stored in the GUI library. The checkbox is simply drawn when user code requests it each frame.

Counterintuitively, this is often less complicated than the traditional "retained mode" style of GUI libraries because there is no duplication of state. That means no setup or teardown of widget object trees, no syncing of state between GUI objects and application code, no hooking up or removing event handlers, no "data binding", etc. The structure and function of your UI is naturally expressed in the code of the functions that draw it, instead of in ephemeral and opaque object trees that only exist in RAM after they're constructed at runtime. You retain control over the event loop and you define the order in which everything happens rather than receiving callbacks in some uncertain order from someone else's event dispatching code.

Crucially, "immediate mode" is a statement about the interface of the library, not the internals. Common objections of the form "immediate mode GUIs can't support X" or "immediate mode GUIs are inefficient because Y" are generally false and based on a misconception that immediate mode GUIs are forbidden from retaining any internal state at all. It is perfectly normal for an immediate mode library to retain various types of internal state for efficiency or other reasons, and this is fine as long as the state stored in user code remains the source of truth. This can even go as far as internally constructing a whole retained widget tree and maintaining it via React-like tree diffing.


I picked up your write-up for that unstructured attempt at making a page of definitions: https://github.com/ocornut/imgui/wiki/About-the-IMGUI-paradi... Thanks!


Cool! Feel free to use it and modify it however you want if that would be useful, no attribution required.


Jetpack Compose on Android has an API exactly like you describe, but I've never heard it described as an immediate-mode GUI. Is it one, or is there something else that makes a GUI toolkit immediate-mode?


Jetpack Compose is very interesting, but I don't think it's exactly what I described. It explicitly retains a lot of state including a widget tree and event handlers. Although the state is opaque because there is no DOM-like API to inspect it, the user is still required to reason about the retained state and know exactly when and which parts should be invalidated. It provides a lot of "convenience" features for the user to annotate and wrap state which is expected to change so that invalidation can happen semi-automatically. But that results in a lot of rules you must follow which can't be automatically checked or enforced, which ends up being rather complex I think: https://developer.android.com/jetpack/compose/state and https://developer.android.com/jetpack/compose/side-effects

Here's a checkbox in Jetpack Compose with the required state annotation and event handling stuff, which if forgotten or done incorrectly will make the checkbox silently malfunction:

    val checkedState = remember { mutableStateOf(true) }
    Checkbox(
        checked = checkedState.value,
        onCheckedChange = { checkedState.value = it }
    )
And here's a checkbox in Dear ImGui, which just uses a plain bool and no event handlers (note that this includes a clickable text label, which would require additional event handlers in Compose):

    static bool foo = true; // could be any bool in the application, nothing special about this one
    ImGui::Checkbox("Foo enabled", &foo);
Here's a great example of how the retained state in Jetpack Compose can confuse people: https://stackoverflow.com/questions/70040541/jetpack-compose...


Sounds like HTML-over-the-wire in the web world: state only resides on the backend. I think it's a good direction.


The best explanation is probably the one by Casey Muratori (https://caseymuratori.com/blog_0001). He devised the technique and coined the term in 2002.


Can't imagine he was the first person to use this technique. I'd assume any game written from the point where it was reasonable to continually redraw the screen instead of using incremental repainting would've used such a technique, because it's simply the easiest way to implement random ad-hoc GUIs if you don't have a widget toolkit underneath you (and possibly even if you do).


Probably. I was using a similar technique in the late 80s in Turbo Pascal. Casey's merit is having given it a name and made a video about it.


See https://en.wikipedia.org/wiki/Immediate_mode_(computer_graph... , and https://en.wikipedia.org/wiki/Immediate_mode_GUI .

TLDR: instead of declaring your GUI objects/components ahead of time, and letting a framework render the prepared scene graph (while calling you back when there are user triggered state changes), you just draw the GUI objects yourself in your main loop, just like you draw all your other objects (backgrounds, sprites, models etc).


Can this be used with the Linux frame buffer device on embedded systems without a window manager?


Looks like Metal Gear Solid. Nice.



