Replies: 34 comments 8 replies
-
To add some reasoning as to why it would be ideal for writes to be atomic, consider how it would be if ECMAScript had a dependency-tracking reactivity feature including a `signal` keyword:

```js
signal count@ = 0

batch {
  count@ = 123
  console.log(count@) // 0, confusion
}

effect {
  console.log(count@)
}
```

Or if ES ever gains decorators for variable declarations, and we implement reactivity as accessors, consider how the modified semantics of the existing language feature would cause confusion:

```js
@signal
let count = 0

batch(() => {
  count = 123
  console.log(count) // 0, expected 123
})
```

In my particular case, I have signal-backed accessors on objects, and this was confusing:

```js
obj.count = 0

batch(() => {
  obj.count = 123
  console.log(obj.count) // 0
})
```
-
FWIW, the docs (at least now) correctly reflect this:
I think I've heard Ryan talk about changing this behavior, by revealing the "current" signal value even when it hasn't propagated into derived memos etc. But at some level your proposal is at odds with memos:

```js
const [signal, setSignal] = createSignal(0);
const double = createMemo(() => 2 * signal());

batch(() => {
  setSignal(1);
  console.log(signal()); // currently 0; you're proposing 1
  console.log(double()); // must be 0
});
```

On the other hand, I agree that the current behavior makes it hard to write code (especially library code), because you generally don't know whether you're in a batch. But just fixing signals that are set directly won't fix the problem in general; memos won't have updated. Though it's maybe more intuitive that memos "take time" to update...

Another idea I have: in dev mode, we could issue a warning if the user reads a signal that has pending changes.
-
Yeah, this one is very much intentional and is important to keep consistency, i.e. be glitch-free. It comes into play with async consistency as well. I have wondered if there are other options, but the repercussions would be widespread and impactful, and the current behavior gives very important guarantees.
-
Do you think the dev-mode warning ("reading from signal that has pending changes") would be useful, or are there legit uses that might be annoying?
-
I'm ok with that, I think. Hmm... In Marko 6 right now we throw an error in this scenario. In React they just assume it's always in the past, and in React 18 they batch everywhere now so it's consistent. Hmm... Yeah, maybe this could promote better behavior. The one place it gets weird, I suppose, is if an effect sets a value and then, before it gets resolved, other effects read from it. My concern is that this warning ends up just being a red herring.
-
I'm not sure I understand how that can happen, but I believe you that it can. :-) Perhaps the warning could be restricted to writing and reading within the same computation, which seems likely to be a bug. But I'm not sure how much overhead that would incur... (only in dev mode, but potentially an annoying amount of tracking to do)
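(For illustration, a userland sketch of the pending-changes warning idea; `devBatch` and `createWarnedSignal` are hypothetical names, and this only covers explicit batch calls, not Solid's internal batching:)

```js
import { createSignal, batch } from "solid-js";

let inBatch = false;
const pendingReads = new Set(); // read functions of signals written during the current batch

function devBatch(fn) {
  inBatch = true;
  try {
    return batch(fn);
  } finally {
    inBatch = false;
    pendingReads.clear();
  }
}

function createWarnedSignal(initial) {
  const [get, set] = createSignal(initial);
  const read = () => {
    if (pendingReads.has(read)) console.warn("reading from signal that has pending changes");
    return get();
  };
  const write = (value) => {
    if (inBatch) pendingReads.add(read);
    return set(value);
  };
  return [read, write];
}
```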
-
What if, when in a batch, the memo getter is simply executed just as if it were a plain function? I would assume that, being in a batch, I'm not in an effect (at least that's how I think of code design working out), so just give me (calculate) the latest value.
I believe always-deferred effects (not just the first time) are the mechanism for mostly glitch-free reactivity (the "batched everywhere" idea mentioned above). The widely adopted native framework, Qt, defers all computations (I can't find the article at the moment, but it was a breaking change in a major version release), and so does Knockout. In any case, it is possible to make a primitive like `createBatchedEffect`. I will use this concept in LUME for all effects exclusively and report back on how it works out. I'm already imagining the benefits:

```js
el.rotation.x = 10 // one effect
el.rotation.y = 20 // another effect, etc.
// and I don't want end users to have to manually `batch` everything, it's not as good of a DX
```

That's totally fine. In one solution, another microtask is scheduled for the future, and the effect will run and see the new value. Another way to solve this is to just always allow values to be read, and only worry about effect scheduling as the performance mechanism (i.e. memos always re-evaluate if needed, lazily). Successive effects in a microtask will read the latest state, which effectively also treats them like sub-tasks of the microtask they're in, compared to the other solution. The only time deferring is needed is in the original macrotask. So essentially one effect is like a sub-task of the microtask it's in. Or in other words: if a variable changed, and an effect that depends on it is coming up ahead, there's no need to defer, just let the effect read the current value (with lazy evaluation for memos). TLDR pseudo idea:
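(A sketch of the general shape of that idea, reconstructed from the surrounding description rather than the original pseudocode; `scheduleEffect` and `flushEffects` are hypothetical names:)

```js
// Assumed sketch: signals write immediately; effects are queued and flushed
// once per microtask, so every effect reads the latest values when it runs.
const scheduledEffects = new Set();
let flushQueued = false;

function scheduleEffect(effect) {
  scheduledEffects.add(effect);
  if (!flushQueued) {
    flushQueued = true;
    queueMicrotask(flushEffects);
  }
}

function flushEffects() {
  flushQueued = false;
  const toRun = [...scheduledEffects];
  scheduledEffects.clear();
  toRun.forEach((effect) => effect()); // each effect reads current signal values
}
```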
This would be quite a big change to Solid though, especially after 1.0. For end users, it would be a breaking change.
-
Here's the Discord conversation: https://discord.com/channels/722131463138705510/780502110772658196/942598067444678727

And here's what we landed on for testing the idea implemented outside of core:

```js
function createBatchedEffect(fn) {
  let initial = true
  let deferred = false
  createComputed(() => {
    if (initial) {
      // first run: execute immediately so dependencies get tracked
      initial = false
      fn()
      return
    }
    // any later dependency change: defer a single re-run to the next microtask
    if (deferred) return
    deferred = true
    queueMicrotask(() => createBatchedEffect(fn))
  })
}
```

It always queues a microtask, because as far as I know, only Solid core would have the ability to iterate on effects (faster) instead of queuing new microtasks for each one (slower). Maybe we would want it to be deferred on the first run too? Right now it runs immediately the first time. Will play with it.
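(For example, with the `createBatchedEffect` above; the signal names here are just for illustration:)

```js
import { createSignal } from "solid-js";

const [x, setX] = createSignal(0);
const [y, setY] = createSignal(0);

createBatchedEffect(() => {
  console.log(x(), y());
});

setX(1); // schedules one deferred re-run
setY(2); // already deferred; no extra scheduling
// logs "0 0" synchronously on the initial run, then "1 2" once on the next microtask
```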
-
Here's a playground example: https://playground.solidjs.com/?hash=-518047525&version=1.3.9

Interesting, I didn't know regular effects were "deferred" relative to a component (i.e. they fire after the JSX effects, despite the JSX effects being defined later in source order). We can see the regular effect runs twice, once for each set, and the first set shows a reactive glitch. The batched effect runs once after setting both signals, after everything else, with no glitch, as we'd expect.
-
To be clear, that isn't the definition of a glitch. Having a setter run to completion twice is consistent. Having it observable at any point that a derivation doesn't reflect its source signal is a glitch. Basically, if in this example you can at any point see the 3 logs not being identical, then it isn't glitch-free: https://playground.solidjs.com/?hash=-1355040064&version=1.3.9

Not all reactive systems are glitch-free, but it is something we value.
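(A sketch of the consistency property being described here, not necessarily the exact playground content:)

```js
import { createSignal, createMemo, createEffect } from "solid-js";

const [count, setCount] = createSignal(0);
const a = createMemo(() => count());
const b = createMemo(() => a());

createEffect(() => {
  // glitch-free: these three values are identical whenever they are observed
  console.log(count(), a(), b());
});

setCount(1); // after the initial "0 0 0", logs "1 1 1" exactly once; observing "1 0 0" would be a glitch
```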
-
According to whom? It depends on the point of reference. Here is one definition:
Here's the example from Wikipedia demonstrated in Solid, which I'd consider as having an inconsistent temporary state as described by both of those sources: https://playground.solidjs.com/?hash=-283178106&version=1.3.9

You mentioned that using memos (derivations) avoids this; here's that version: https://playground.solidjs.com/?hash=-140464327&version=1.3.9

I'd still say the first example counts as a glitch (be it from incorrect but accidentally-easy usage of Solid), and the reason is that Solid's current effects run for every signal modification instead of being batched. Here's the first example using a custom batched/deferred version of `createEffect`: https://playground.solidjs.com/?hash=-1105853944&version=1.3.9

The code here for reference:

```jsx
import { render } from "solid-js/web";
import { createSignal, createComputed } from "solid-js";

function Counter() {
  const [seconds, setSeconds] = createSignal(0)
  setInterval(() => setSeconds(s => s + 1), 1000)

  const [t, setT] = createSignal(0)
  const [g, setG] = createSignal(false)

  // t = seconds + 1
  // g = (t > seconds)
  createEffect(() => {
    setT(seconds() + 1)
  })
  createEffect(() => {
    setG(t() > seconds())
  })

  createEffect(() => {
    console.log('t:', t())
    console.log('t > seconds?', g())
  })

  return (
    <h1>see console</h1>
  );
}

render(() => <Counter />, document.getElementById("app"));

// custom batched/deferred version of createEffect
function createEffect(fn) {
  let initial = true
  let deferred = false
  createComputed(() => {
    if (initial) {
      initial = false
      fn()
      return
    }
    if (deferred) return
    deferred = true
    queueMicrotask(() => createEffect(fn))
  })
}
```

Implementing it in core would be more efficient of course. I believe this is a lot better than having a glitchy intermediate state.
Now we're left with one point of cognitive load, simplified:
Maybe much rarer cases, that most people may never need to care about, will require `batch`.
-
You are correct that effects being batched leads to a behavior where they apply the changes in groups. To be clear, I consider that implementation not equivalent to the Wikipedia example. They are describing derivations, not effects (the choice of scheduling when to affect the outside world). But look at it this way. I moved the console.log in your original example:

In any case, removing batch on the effects call also makes it go away without a microtask, and since it is scheduled after, it has the same stabilization. So why did I add batching to effects, you ask? Transitions. A Transition must be batched. And if we start one during an effect queue it entangles. Mind you, I did start queuing Transitions, so it may be worth revisiting. I'll give this a look.
-
That article was meant to be generic, and does not really care if we're using observables, dep-tracking effects, or even just event emitters. I think you're constraining the definition by thinking about it too specifically. In Solid.js, "derivations" can be performed via effects, which is what my examples did, so the concept in the Wikipedia example applies (just under different terms; they had to use some terminology to even describe anything).
But that's also an effect, with some added caching and a signal. That's another way to avoid glitches. I knew that, but I intended to show the issue only with effects and signals (terms the Wikipedia article is not using, but nonetheless one way to achieve what it describes). Even with the batched/deferred effects, memos would still be a useful alternative.
That's yet another way.
Outside of components too? Circling back to the original but related topic, I think all signals should simply hold their last set value, irrespective of effect scheduling or batching. And I'd say that someone triggering signal-dependent code inside a batch, side-stepping effect scheduling, should be considered bad practice, but I think it'll be a rare case. Plus, memos should re-evaluate if they need to, lazily when called (and even the next effect can skip evaluation since the memo evaluated already, and only trigger dependents). With that set of changes, at least there will be no surprises in signal values, and we have a scratch pad.
-
The problem is that signals don't have dependencies, so we can't ensure things only run once synchronously (important for being glitch-free). So strictly, if you see a reactive statement with an equal sign, it is a derivation. Effects showing something inconsistent isn't great. I had my reasons, a desire to finish change sets, as without batch it can enter the pure execution part mid queue execution. But for consistency it's probably fine, though with computed/effects we can never guarantee not running twice anyway. Combined with other scheduling, batching also can eliminate infinite loops more easily, where this will just keep going. In any case, signals still shouldn't show the last value in a batch, for consistency reasons I have stated before. But at least we can avoid opting people in unintentionally. Although it is fair to point out that in this special execution zone, losing consistency may be ok.
-
I'm not sure what you mean. Can you make an example using the version of `createEffect` above? It would be really helpful if you can show working/broken examples; a picture painted through code really helps me see.
-
Most of this is motivated by correctness rather than intuitiveness, because I have to safeguard other potential features. That's the main reason why I'm very careful about the batching behavior in terms of keeping the value in the past. The current state of Effects is a different thing and is intentional. It might not seem intuitive, but it works the same way clocks work in S.js. As weird as it might seem, it isn't strictly inconsistent. It finishes all updates to completion before applying the next change. In fact, in S.js all computations work this way. I did change the behavior in Solid to be more like MobX because of the confusion there. But around the time I added concurrent transitions I switched it back just for Effects to keep things isolated. That is no longer necessary though, so I look forward to restoring that.
While not strictly that version, as I said, it just over-executes slightly. It isn't typically a big deal. But it's just the sort of thing that a memo's knowledge of the graph helps prevent. Which is part of why I ultimately want to get rid of `createComputed`.
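(To illustrate the contrast being made here, assuming nothing beyond the standard Solid primitives:)

```js
import { createSignal, createComputed, createMemo } from "solid-js";

// Deriving with a writable signal plus a computation: the graph doesn't know
// that nPlusOne depends on n, so downstream consumers can over-execute or
// observe an intermediate state.
const [n, setN] = createSignal(1);
const [nPlusOne, setNPlusOne] = createSignal(2);
createComputed(() => setNPlusOne(n() + 1));

// Deriving with a memo: the dependency is part of the graph, so it stays
// consistent with its source and only recomputes when needed.
const nPlusOneMemo = createMemo(() => n() + 1);
```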
-
I hear you, but I don't think the S.js technique is the only way to implement reactivity while still having performance. All that you just described would not be necessary with always-batched effects, and would not have values-in-the-past. Win-win. But the thing is, it would break just about everything built on top of the current APIs. I will make a concept once I get a chance.
-
Regardless of any changes to Solid, this would be great to highly encourage in a best practices section. Always memos for derived values, and erasing the prose around `createComputed`.
-
Yeah, I want to at least know that this will happen. I've been exploring a lot of things, but like when I introduced it, I still feel there are cases where it is needed. When I know with more certainty I'd definitely move to deprecate it. What is clear is that there are tradeoffs. Like maybe any cases I can't eliminate from createComputed do need to move to createEffect. I'm not happy about that though, so I want to see what the options are.
-
I made some concepts. Keep in mind these examples are essentially reactivity built on top of reactivity, so performance is off the table; they only serve to show the conceptual behavior.

Here is your first example modified slightly in order to show how it executes (note the output at the end): https://playground.solidjs.com/?hash=-1825045236&version=1.3.9

Here is another version (note the final output): https://playground.solidjs.com/?hash=802237130&version=1.3.13

The following demo works the same, with the same output: https://playground.solidjs.com/?hash=-358673099&version=1.3.13

Those examples are a bit small compared to real-world scenarios in which there will be a higher number of effects, plus more dependencies used within effects. With that in mind, the following example re-creates dependency tracking for the purpose of being able to move not-yet-executed effects to the end of the queue if any effects prior to them in the queue write to any of the dependencies of those not-yet-executed effects. In the previous examples, an effect is moved to the end of the queue only once, upon its first dependency change (not exactly what we want), whereas in the following example any dependency change moves a not-yet-run effect to the end of the queue. However, the example is not complex enough to show any difference, and the output is still the same: https://playground.solidjs.com/?hash=1889548591&version=1.3.13

I need to make a more complex scenario to show how it will eliminate even more unnecessary effect runs, but I think you may already be able to picture it. These examples didn't show how glitches are eliminated, but basically the effect runs you saw eliminated are the ones where glitches would have happened.
-
The reduction of extraneous running can happen without microtask queueing. Right now the batches are there intentionally because of the desire to apply all the changes in steps. But as I mentioned, it is probably safe to remove the batching and not have that behavior. And that basically resolves everything that caused this issue to be reported in the first place. I need to verify a couple of things, but I believe there is a synchronous solution here that doesn't introduce your definition of glitches.

So to me the microtask queueing is a separate issue, and a question of whether we want to introduce async. This has an impact on things like performance even without causing any extra execution. My gut is to avoid this simply because of inconsistent observability, i.e. my definition of glitches. We've looked at more advanced approaches with Marko here as well, including breaking chained updates over frames etc., essentially turning an infinite loop into an animation, but even then we don't initially schedule effects async.

Part of my hesitance might be coming from the fact that simply microtask-queueing initial effects has a negative performance impact on benchmarks. I've tried it. And since the goal is to not apply effects in batches but have it just free-for-all resolve, I see no benefit of introducing microtasks beyond that. It would be different if we were to apply each batch after the first later, i.e. the first batch is synchronous, the next batch is basically setImmediate. But that has today's behavior of batches. If we want consistency we have to apply the complete changeset, and if that is the world of all possible changes that happen throughout the process, well then further scheduling is off the table (from my definition of glitch-free).
-
I happened to rename it before I saw the reply. I don't see how it is possible for effects to be batched without a microtask and without the user having to explicitly opt into batching, apart from the user having to use an API like `batch`. Another thing is that end users by default aren't encouraged to use `batch`.

As far as benchmarks, I shall try to see what I can do. My hunch is that the cost of a single microtask is only notable for small cases, that js-framework-benchmark is nice but also not the most representative of all app use cases, and that the cost of a microtask will be negligible the bigger an app gets, especially if it has many moving parts like an animated graphical scene; similar to my theory that the cost of WebGL DOM bindings for WebAssembly will be negligible compared to the savings from running a bunch of physics and matrix math in Wasm with help from SIMD, or similar. My goal is to get some measurements in place.

As for glitches, or inconsistent state, I imagine it leading to some performance pitfalls, e.g. an effect temporarily acting on an inconsistent intermediate state.

What we need is more examples of more scenarios. The ones in my previous comment were just simple starters. I'll work on making a collection of them so we can better understand the implications. React 18 just came out, and is moving to default batching for all state changes.
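(As a starting point for those measurements, a rough standalone sketch of per-microtask overhead; the numbers vary wildly by engine and load, and this is not a substitute for a real benchmark:)

```js
// Measures the average cost of scheduling + running one microtask.
function measureMicrotasks(count) {
  return new Promise((resolve) => {
    const start = performance.now();
    let remaining = count;
    const tick = () => {
      if (--remaining === 0) {
        resolve((performance.now() - start) / count);
      } else {
        queueMicrotask(tick);
      }
    };
    queueMicrotask(tick);
  });
}

measureMicrotasks(100_000).then((msPerTask) =>
  console.log(`~${msPerTask.toFixed(6)} ms per microtask`)
);
```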
-
To be fair, React already did this in 90% of places; they just finished the story. And again, they stay consistent by showing all values in the past. If it isn't clear, I think batching should work the same way as today. Effects just don't necessarily need to be batched. We already schedule them into their own queue that runs synchronously at the end of changes.
-
Values-in-the-past are one of the absolute worst things I dislike about React. They are a bad developer experience. I've had to deal with it many times at work, and just don't like them. Coupling the render cycle to the observability of state values leads to people writing brittle code, especially if they don't come from a strong React background, as I've seen in real-world React projects. Things should just be as intuitive as possible. I don't believe values-in-the-past are required for good performance (proof needed).
That only works within Solid's framework, not outside of it. Changes can happen outside of effects, for example in an event handler or other async callback.

Here are a few more examples (with more to come). The first one shows an issue, in that the scene re-loads during the calculation of a variable's value every second, resulting in unexpected and poor-performing behavior (try to scroll to zoom): https://playground.solidjs.com/?hash=1025549064&version=1.3.13

As the comment in there implies, we can solve it by using `batch`: https://playground.solidjs.com/?hash=-394158961&version=1.3.13

Now here it is solved without having to think about `batch`: https://playground.solidjs.com/?hash=-54762955&version=1.3.13

In these examples, I purposefully made it clear what the problem is, because the scene takes some time to load so the issue is very visually apparent. However, in many real-world scenarios, the issue will not be visible, and will go unnoticed until the performance is bad enough. In the following example, the same problem exists as in the first example, but we can't see it: https://playground.solidjs.com/?hash=-32468437&version=1.3.13

Actually, we can see the issue if we try to select the "Page 1" text: it will undo our selection. It is very possible for these sorts of issues to go unnoticed, especially in cases where selection or user interaction doesn't change the state of visuals, but still re-creates the DOM. Deferred/batched effects so far,
Upcoming examples will work on showing improvements in examples that have more effects and more signals per effect; examples that are more representative of real applications.
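(For readers without the playgrounds open, a sketch of the shape of the manual `batch` fix mentioned above; the signal names and `rebuildScene` are illustrative, not taken from the linked examples:)

```js
import { createSignal, createEffect, batch } from "solid-js";

const [rotation, setRotation] = createSignal(0);
const [position, setPosition] = createSignal(0);

// stand-in for the expensive scene work in the linked examples
const rebuildScene = (r, p) => {/* imagine reloading a 3D scene here */};

createEffect(() => {
  rebuildScene(rotation(), position());
});

setInterval(() => {
  // without the batch, the effect would re-run once per write (twice per tick)
  batch(() => {
    setRotation((r) => r + 1);
    setPosition((p) => p + 1);
  });
}, 1000);
```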
-
No new examples yet, but some fixes and better types: https://playground.solidjs.com/?hash=-375386698&version=1.3.13
Of course this is totally not ideal the way it is implemented, being essentially a hack on top of Solid, and it only works with the special version of `createEffect`.
-
I'm moving this into discussions because that is what this has become, and trying to follow this is a bit hard. There are 3 things being discussed from my perspective.
All these can be talked about independently. While there is a lot of discussion about implementation I want to answer these questions because I think there is too much conflation between different parts. My take is:
What I'm hearing from this thread is the desire to only batch effects and update everything else immediately. That is a different meaning for batching. To me, batching is also a guard against expensive computations, and only batching effects would not protect us here.
None of these are actually easy to answer, and they are very fundamental to the reactive system. So the discussion here is worth having. But I'd start here before worrying about implementation.
-
The reason I'm trying to separate it is, we don't need a microtask to run the effects later. We already have the means for that synchronously with our 2-queue approach. We already do this with the exception of event handlers/async callbacks, and could handle those even without a microtask perhaps, but that's sort of beside the point.

The purpose of batching is to apply multiple signal changes without executing computations downstream until they all apply. Like if you have something that runs an expensive computation when a or b changes:

```js
const [a, setA] = createSignal("a");
const [b, setB] = createSignal("b");
const c = createMemo(() => expensive(a(), b())); // expensive
const d = createMemo(() => a() + b()); // cheap

createEffect(() => render(c()))

setA("A") // expensive/cheap runs once
setB("B") // expensive/cheap runs twice

batch(() => {
  setA("A")
  setB("B")
}) // expensive/cheap runs once
```

The whole problem comes inside the batch:

```js
batch(() => {
  setA("A")
  d(); // what is the value here?
  refToDOMC // what does the rendered DOM look like?
  setB("B")
})
```
-
Discussion on timing phases and
-
I just want to reiterate that deferring with a microtask is required for batching to work when signals have their values set from plain JavaScript outside of Solid's control. If Solid schedules effects relative only to Solid's own synchronous "event loop", then only code within Solid's control will have the benefits that you describe. This is the problem (a very common one):

```js
// This is a file written by a library author.
const [foo, setFoo] = createSignal(0)
const [bar, setBar] = createSignal(0)

createEffect(() => {
  console.log(foo(), bar())
})

// Absolutely any other code anywhere in a JavaScript app, possibly
// unrelated to Solid.js whatsoever, can import these, and can call them:
export {setFoo, setBar}
```

This example will show inconsistent state. There is no way that library authors are going to be able to enforce that all users always use `batch`.

Live Playground showing inconsistent state

The goal you proposed @ryansolid is only a partial solution, because it solves the issue specifically relative to code that is under the control of Solid.js (for example Solid components and effects). The goal does not solve the issue for code unmanaged by Solid.js, making integrations with other systems always require explicitly calling `batch`.

Microtasks are a nice way to force everyone to "batch" by default. This is the standard that DOM reactive APIs (e.g. MutationObserver, ResizeObserver, etc.) have already adopted, and for good reason.
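(To make the constraint concrete, here is roughly what a library author would have to do today; `setFooAndBar` is a hypothetical wrapper, and it only helps when callers go through it:)

```js
import { batch } from "solid-js";

// Pre-wrap related writes in batch so external callers can't trigger effects
// between them. Anyone importing setFoo/setBar directly still bypasses this,
// which is the enforcement problem described above.
export function setFooAndBar(nextFoo, nextBar) {
  batch(() => {
    setFoo(nextFoo);
    setBar(nextBar);
  });
}
```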
-
I came up with another way to defer effects, and it's actually way simpler than trying to control the effects. Instead, we defer Solid's whole reactivity system to the next microtask, wrapped in `batch`: https://playground.solidjs.com/anonymous/27f29d3d-00f4-4175-93c2-90114a868aa5

It's beautiful. I'm going to heavily test this out.
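(A hedged, userland sketch of that general approach; the linked playground may differ, and `createDeferredSignal` is a made-up name. Note that a read right after a deferred write still returns the old value until the flush, so this only illustrates the automatic-batching part:)

```js
import { createSignal, batch } from "solid-js";

let queue = [];
let flushScheduled = false;

function createDeferredSignal(initial) {
  const [get, set] = createSignal(initial);
  const deferredSet = (value) => {
    queue.push(() => set(value));
    if (!flushScheduled) {
      flushScheduled = true;
      queueMicrotask(() => {
        const writes = queue;
        queue = [];
        flushScheduled = false;
        // apply every queued write in one batch so effects run once
        batch(() => writes.forEach((write) => write()));
      });
    }
  };
  return [get, deferredSet];
}
```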
-
Describe the bug
Your Example Website or App
https://playground.solidjs.com/?hash=-937748346&version=1.3.9
Steps to Reproduce the Bug or Issue
See link
Expected behavior
Reading and writing signals should be atomic operations.
Screenshots or Videos
No response
Platform
n/a
Additional context
I spent a weekend debugging an issue I thought was in LUME, because I never expected reading a signal after setting it would return an old value. Essentially the write is not atomic.
The issue in my case also wasn't obvious because the read was far removed (several methods deep) from where the write happens.
The reason I wanted to use `batch` was to group the write of a signal with some method calls after it, so that reactivity would be triggered after the write and subsequent method calls.
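(Illustrative shape of that intent; the signal and the follow-up method names are hypothetical:)

```js
import { createSignal, batch } from "solid-js";

const [count, setCount] = createSignal(0);
// hypothetical follow-up methods that should be grouped with the write
const updateDerivedState = () => {/* ... */};
const syncToDOM = () => {/* ... */};

batch(() => {
  setCount(123);        // write the signal...
  updateDerivedState(); // ...then run the follow-up methods,
  syncToDOM();          // ...so dependent effects fire once, after all of this.
  console.log(count()); // the surprise: still 0 inside the batch
});
```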