Recent Forum Posts
Nathaniel J. Smith (guest) 20 Oct 2018 20:16
in discussion Hidden / Per page discussions » Update on Structured Concurrency

Think about a client attempting to perform a non-idempotent HTTP request. From the client's perspective, there are three possible outcomes: clean success, clean failure, or an ambiguous state where it doesn't know whether it failed or not. Clean failure is fine – you just retry. But if you're in the ambiguous state, you're screwed, because you don't know whether you can safely retry or not.

The point of stopping accepting incoming requests is to minimize the number of requests that end up in the ambiguous third state, and convert them into clean failures instead. In fact, if you know that all requests finish in less than N seconds, and you set an N second timeout, then that gives you a guarantee that *no* requests will end up in the ambiguous state unless there's some other independent failure (network partition, power loss, etc.). If you keep accepting new requests during the grace period, then there's no guarantee at all.

It's also nice operationally because it lets you safely use a large grace period – like a minute or whatever – just in case there's some really long-running request, while knowing that in 99% of cases it will terminate in a second or two. If you keep accepting new requests during the grace period, then a busy server will always use the whole grace period, and "graceful" shutdown becomes essentially equivalent to a hard shutdown.
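For illustration, a minimal client-side sketch in Python of those three outcomes (the classification is hand-rolled and deliberately simplified; which exceptions land in which bucket is my assumption, not something from the post, and real HTTP clients need more care):

```python
import http.client
import socket

def post_once(host, path, body, timeout=5.0):
    """Attempt a non-idempotent POST once and classify the outcome."""
    conn = http.client.HTTPConnection(host, timeout=timeout)
    try:
        try:
            conn.request("POST", path, body=body)
        except (ConnectionRefusedError, socket.gaierror, socket.timeout):
            # The request never made it to the server: clean failure,
            # safe to retry.
            return "clean_failure"
        try:
            resp = conn.getresponse()
            resp.read()
        except (socket.timeout, ConnectionResetError, http.client.BadStatusLine):
            # The request was sent but we never saw a response: we cannot
            # know whether it was processed, so a blind retry may duplicate it.
            return "ambiguous"
        # Any response at all means the server told us what happened
        # (whether a 5xx is safe to retry is application-specific).
        return "success" if 200 <= resp.status < 300 else "clean_failure"
    finally:
        conn.close()
```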

by Nathaniel J. Smith (guest), 20 Oct 2018 20:16

Kotlin coroutines have multi-core support. It is obviously complex under the hood (just like the OS kernels, as you correctly note), but the users are not exposed to that complexity. It all just works for them.

Cancellation is definitely easier for coroutines than for threads, since it is fully cooperative. It only works with async APIs, of course, so we are gradually expanding the set of non-blocking APIs available to the users by providing them with the corresponding non-blocking libraries.

When a coroutine is cancelled in Kotlin, it immediately and synchronously notifies all its children about the cancellation and then waits for all of them to complete, which is a good default when you want to terminate something as quickly as possible.

To perform a graceful shutdown we use the following pattern. Let us take a web server, for example. We keep its acceptor coroutine in a separate scope from all the connection coroutines. To shut it down gracefully, we cancel its acceptor coroutine in a normal (non-graceful) way and then wait for the scope with the connections for a given time, giving it a chance to finish serving user requests. If time is up, the connections' scope gets cancelled too. It gets more complex with HTTP pipelining and HTTP/2.0, so there is some non-trivial code in our Ktor framework to handle that. However, it does not seem to be needed by general users of coroutines, so we are currently reluctant to add direct support for that kind of multi-stage shutdown to our API.
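For comparison, a rough analogue of this two-scope pattern in Python with the Trio library (my sketch, not the actual Kotlin/Ktor code; `handler` and the `shutdown` event are assumed to be supplied by the caller):

```python
import trio

async def serve(listener, handler, shutdown: trio.Event, grace_period=1.0):
    # Acceptor and connection handlers live in separate scopes: cancelling
    # the acceptor scope stops new connections, while existing connections
    # get a grace period before their scope is cancelled too.
    async with trio.open_nursery() as conn_nursery:
        with trio.CancelScope() as accept_scope:

            async def watch_shutdown():
                await shutdown.wait()
                accept_scope.cancel()  # stage 1: stop accepting
                # stage 2: give running handlers a grace period, after
                # which the connection scope is cancelled as well
                conn_nursery.cancel_scope.deadline = (
                    trio.current_time() + grace_period
                )

            conn_nursery.start_soon(watch_shutdown)
            async with listener:
                while True:
                    stream = await listener.accept()
                    conn_nursery.start_soon(handler, stream)
```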

by Roman Elizarov, 20 Oct 2018 15:11

We've seen a similar problem when switching to structured programming. People complained that they couldn't do hyper-optimizations using gotos, they complained about the cost of maintaining the call stack, and so on. But then the whole thing kind of petered out.

I think we are facing a similar (non-)problem here. For example, the cost of yield() in libdill is approximately 20 nanoseconds. Call it once a second and you get a performance hit of 0.000002%.

by martin_sustrik, 20 Oct 2018 11:30
glaebhoerl (guest) 20 Oct 2018 11:23
in discussion Hidden / Per page discussions » Update on Structured Concurrency

And the *really hard* problem is making that not have a performance penalty, which people doing heavy number-crunching are often unwilling to accept…

(The Go folks have some intriguing work on (IIRC) using the same kind of mechanism for "zero-cost preemption" as for "zero-cost exception handling"; meanwhile other smart people allege that this is better at hiding costs than eliminating them; and I'm not qualified to judge.)

by glaebhoerl (guest), 20 Oct 2018 11:23

Why would you want to do that? Say you are shutting down an HTTP server. You give it a 1-second grace period. Does it make any difference whether, within that interval, it just processes fully received requests or whether it also finishes reading half-read requests and processes them? It's still 1 second. Who cares?

by martin_sustrik, 20 Oct 2018 07:45
Matthias Urlichs (guest) 20 Oct 2018 07:28
in discussion Hidden / Per page discussions » Update on Structured Concurrency

It's not just ordered cancellation. Consider an HTTP/1.1 connection. You want to teach it to throw away incomplete requests but to return in-progress responses. So your HTTP/1.1 handler would enable soft-cancel while reading from the socket, disable it while generating and sending the reply, and repeat.

Granted, in this case you could simply call shutdown(RDR) on the socket, but that won't work with HTTP/2.
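With Trio as it exists today, one way to approximate that per-phase toggle is shielding: leave the read phase cancellable and shield the reply phase, so a shutdown cancellation can only take effect while waiting for the next request. A sketch under that assumption; `read_request` and `send_response` are hypothetical helpers, not real APIs:

```python
import trio

async def handle_connection(stream, read_request, send_response):
    # read_request(stream) parses one request (returning None at EOF);
    # send_response(stream, request) generates and sends the reply.
    while True:
        # Cancellable phase: an incomplete request is simply thrown away
        # if the handler is cancelled here.
        request = await read_request(stream)
        if request is None:
            return
        # "Soft-cancel disabled" phase: shield the in-progress response so
        # a shutdown cancellation cannot interrupt it; the cancellation is
        # delivered as soon as we loop back to reading.
        with trio.CancelScope(shield=True):
            await send_response(stream, request)
```

In practice you would still want a hard timeout inside the shielded scope so a stuck client cannot block shutdown indefinitely.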

by Matthias Urlichs (guest), 20 Oct 2018 07:28

The really hard problem, I guess, is not technical: it is to teach multi-threaded programmers to use yield() when doing heavy number-crunching so that their threads can be cleanly canceled.
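In cooperative frameworks the fix looks the same as in libdill: sprinkle checkpoints into the hot loop. A sketch in Python with Trio, where `await trio.sleep(0)` serves as the yield point at which cancellation can be delivered:

```python
import trio

async def crunch(data, chunk=10_000):
    total = 0
    for i, x in enumerate(data):
        total += x * x          # stand-in for the actual number-crunching
        if i % chunk == 0:
            # Checkpoint: lets other tasks run and, crucially, lets a
            # pending cancellation be delivered here.
            await trio.sleep(0)
    return total
```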

by martin_sustrik, 20 Oct 2018 06:31

Our tentative idea is to add a "soft cancelled" state to our cancel scopes. Currently, when a cancel scope enters the cancelled state, that's delivered to any blocking operations by default, but they can explicitly opt-out (using "shielding"). A soft cancel state would *not* be delivered by default, except for operations that explicitly opt-in. So then for a conventional graceful shutdown you'd do a soft cancel + set a hard deadline, and make sure that accept loops and similar all opt-in to soft cancellation.

Isn't that just ordered cancellation (see above) in disguise? If you had one nursery with the thread that accepts connections and another nursery with the connections themselves, you could cancel the former first and the latter second.

And on popular OSes the standard blocking APIs have terrible support for cancellation.

In theory, the support is there. You can use pthread_kill() to send a signal to a thread, and if the thread is stuck inside a blocking function, the function should return with an EINTR error. Except that I have no confidence in this working properly. But, on the other hand, if the standard-library folks actually tried to make it work, then I can see a way forward. (Btw, you can do the same with processes, just use kill() instead of pthread_kill().)
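For what it's worth, the mechanism can be driven from Python too (Unix only, and with the big caveat that CPython runs Python-level signal handlers only in the main thread, which already hints at how fragile this is):

```python
import os
import signal
import threading
import time

class Cancelled(Exception):
    pass

def on_signal(signum, frame):
    raise Cancelled

signal.signal(signal.SIGUSR1, on_signal)

r, w = os.pipe()
main_id = threading.get_ident()

def canceller():
    time.sleep(1.0)
    # Interrupt the blocking read in the main thread: the underlying
    # read(2) returns EINTR and the handler above raises.
    signal.pthread_kill(main_id, signal.SIGUSR1)

threading.Thread(target=canceller, daemon=True).start()

try:
    os.read(r, 1)  # blocks: nothing is ever written to the pipe
except Cancelled:
    print("blocking read was interrupted by the signal")
```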

by martin_sustrik, 20 Oct 2018 06:26
Michael South (guest) 20 Oct 2018 03:29
in discussion Hidden / Per page discussions » A really hard problem

If grandma is a physicist, tell her that the vaguely recognizable code bits are the excitons, and the other code is the lattice for the computational phonons.

by Michael South (guest), 20 Oct 2018 03:29
Nathaniel J. Smith (guest) 20 Oct 2018 02:29
in discussion Hidden / Per page discussions » Update on Structured Concurrency

Oh, and a minor point regarding your "go_process" suggestion: We're using a slightly different API pattern in Trio. Instead of a "start in new [task/thread/process]" primitive, we have a "start in new task" primitive, that we compose with a "switch from task to thread" primitive, and hopefully a "switch from task to process" primitive in the future. This is nice because like you say, threads and processes have their own complexities, and this way we don't have to bake a particular set of choices into the nursery primitive. People can even build their own run-in-process APIs as third-party libraries.

Of course, this is in the context of a Python library, where the GIL makes it a no-brainer to do everything async within a single thread by default. In another language it would make sense to schedule tasks across multiple cores by default.
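A minimal sketch of that composition with current Trio, where `trio.to_thread.run_sync` plays the role of the "switch from task to thread" primitive (`blocking_io` is just a placeholder):

```python
import time
import trio

def blocking_io():
    time.sleep(1)  # stand-in for a blocking call (file I/O, a C library, ...)
    return "done"

async def main():
    async with trio.open_nursery() as nursery:
        # "Start in new task" primitive...
        nursery.start_soon(trio.sleep, 0.5)
        # ...composed with "switch from task to thread":
        result = await trio.to_thread.run_sync(blocking_io)
        print(result)

trio.run(main)
```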

by Nathaniel J. Smith (guest), 20 Oct 2018 02:29
Nathaniel J. Smith (guest) 20 Oct 2018 01:25
in discussion Hidden / Per page discussions » Update on Structured Concurrency

Regarding grace periods: yeah, this is something we're still trying to figure out how to handle nicely in Trio. (Discussion thread.)

Your challenge with combining the one hour deadline and the one minute deadline is actually easy in Trio: cancel scopes can be nested, and you can change cancel scope deadlines on the fly. So each connection handler will have its own cancel scope with a one hour timeout, and for the grace period, you wrap your whole program in a cancel scope, and when you want to shut down, set that shared scope's deadline to 1 minute. Trio will take care of composing those deadlines together.
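For concreteness, here is roughly how that looks with Trio's public API (nested scopes plus a mutable deadline; the one-hour and one-minute numbers are the ones from the example, everything else is my placeholder):

```python
import trio

async def handle_connection(conn_id):
    # Each connection handler gets its own cancel scope with a one-hour timeout.
    with trio.move_on_after(60 * 60):
        await trio.sleep(60 * 60)  # stand-in for actually serving the connection

async def main():
    # A shared cancel scope wraps the whole program; initially it has no deadline.
    with trio.CancelScope() as shutdown_scope:
        async with trio.open_nursery() as nursery:
            for i in range(3):
                nursery.start_soon(handle_connection, i)
            # When it's time to shut down, give everything one more minute.
            # Trio composes this with each handler's one-hour scope: the
            # earlier of the two deadlines wins.
            shutdown_scope.deadline = trio.current_time() + 60
    print("shut down")

trio.run(main)
```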

But in real life, that's not enough, because during the grace period you usually want to expedite things, e.g. by letting in-progress requests finish, but not accepting any new incoming requests. So you need some way to communicate to everyone to stop accepting new requests, including to tasks that are e.g. blocked in accept().

Our tentative idea is to add a "soft cancelled" state to our cancel scopes. Currently, when a cancel scope enters the cancelled state, that's delivered to any blocking operations by default, but they can explicitly opt-out (using "shielding"). A soft cancel state would *not* be delivered by default, except for operations that explicitly opt-in. So then for a conventional graceful shutdown you'd do a soft cancel + set a hard deadline, and make sure that accept loops and similar all opt-in to soft cancellation.

Regarding multi-core support: Yeah, there's definitely no reason why you couldn't combine structured concurrency with a multi-core scheduler. The one big problem is that you really want solid cancellation support. (Of course you want this anyway, but if you're trying to be structured then you're sort of forced to deal with it.) And on popular OSes the standard blocking APIs have terrible support for cancellation. So, even if you're using a multi-core scheduler, you can't use traditional blocking libraries; you still have to build a whole new set of I/O primitives with cancellation support, and then rewrite all your libraries to use them. This is a lot easier to justify if you're writing an async library, since for unrelated reasons, async libraries *also* need to build a new set of I/O primitives and rewrite everything to use them, so you can sneak the cancellation support in at the same time.

I don't think there's any way to retrofit cancellation support into classic blocking APIs, and even if you could you wouldn't want to – you can't expect existing libraries to handle it correctly. OTOH there's no reason you couldn't implement a new set of APIs that act like the classic blocking synchronous ones, but that support cancellation. (I discuss this a little in my cancellation post.)

Golang even demonstrates that it's possible to provide a conventional blocking synchronous I/O API where all the blocking operations are integrated with a user-space scheduler, using existing OS APIs under the hood. But unfortunately, they didn't bake in cancellation when they were doing that. It's too bad – a real missed opportunity :-(.

by Nathaniel J. Smith (guest), 20 Oct 2018 01:25
Marcel (guest) 12 Oct 2018 09:16
in discussion Hidden / Per page discussions » What Can Philosophers Learn from Programmers?

No, I do not have any pointers to the literature. My guess would be that the best sources are "how to write math"-type blog entries and handouts, as it is more a matter of craftsmanship than of formal mathematics, comparable to writing "good code" instead of only writing "correct code". In Germany there is this great, short book by Beutelspacher, "Das ist o.B.d.A. trivial" (translates to "that is WLOG trivial"), which deals a lot with common pitfalls when making definitions and with how to use mathematical language. It does not touch the content of the mathematics (i.e. the question of whether a definition will turn out to be useful), but it postulates the primacy of clarity. So whenever you make a definition, it should increase the clarity of your reasoning. For your everyday math, this is pretty much the most important rule.

I think that is actually one of the points where mathematicians could learn a lot from programmers, as among programmers there is much more awareness about the importance of good style than among mathematicians.

by Marcel (guest), 12 Oct 2018 09:16

Do you have any pointers to the literature? I always suspected that the problem of crafting definitions is more like engineering than maths and that's why it's not addressed by mathematicians. But maybe I just failed to find the relevant research.

by martin_sustrik, 12 Oct 2018 08:07
Kevin (guest) 11 Oct 2018 23:45
in discussion Hidden / Per page discussions » Anti-social Punishment

This is a great post. Really fascinating to see.

Adjacent to the paper, but one thing that really struck me was how large an impact a small feedback mechanism had.

by Kevin (guest), 11 Oct 2018 23:45
Marcel (guest) 11 Oct 2018 09:54
in discussion Hidden / Per page discussions » What Can Philosophers Learn from Programmers?

Usually in math people need to motivate new definitions. We actually learned quite early that a definition ideally comes with an example and a counter-example, hopefully convincing you that it is a good definition.

You would not define "grue" unless you found an object with this property, or had a strong suspicion that something like this could exist (say, some moldy cheese in your fridge) or should not exist, so that you can talk about it and infer further properties from the definition. Sometimes it takes a while to see whether a definition makes sense or not. Often it is really just an abbreviation. And this already solves the somewhat stupid riddle of induction, as we have many reasons to use the words green and blue, but except for that moldy cheese in the attic, no reason to come up with words like bleen.

Leaving aside the idiotic inductive reasoning philosophers have actually been observed to do: after time t the definitions of bleen and grue become in some sense trivial, since at that point all objects with these properties have been or are in existence, and each is just a name for some collection of things (probably mainly doodads created by philosophy students that change their color exactly at time t), while green and blue remain quite useful concepts.

by Marcel (guest), 11 Oct 2018 09:54

What has been will be again, what has been done will be done again; there is nothing new under the sun. :)

by martin_sustrik, 10 Oct 2018 17:29
by Peter (guest), 10 Oct 2018 16:25

I wasn't trying to say anything about Feynman. Does it sound so?

The quote comes almost verbatim from Wikipedia, which cites Feynman's Nobel lecture as a source.

Zarathustra, in turn, is a reference to Nietzsche's "Thus Spoke Zarathustra", which features the theme of eternal return, although in a somewhat different sense than the above.

by martin_sustrik, 10 Oct 2018 04:59
anon (guest) 09 Oct 2018 22:11
in discussion Hidden / Per page discussions » One-person Universe

But Martin:

Feynman was not naive: he was a well-known prankster after all. His memoirs are illustrious but I wouldn't vouch 100% for their accuracy, even w.r.t. his advisor, Wheeler, who was only 7 years his senior.
Your quote suggests that the making of Dr Feynman was (partially) due to his meeting some very original thinkers (none of whom was Zarathustra).

by anon (guest), 09 Oct 2018 22:11
Apostolis (guest) 04 Oct 2018 09:22
in discussion Hidden / Per page discussions » Anti-social Punishment

Well, it works as you describe it. People do not consider the laws to be just and most of us do not follow them. The same behavior is also apparent in the Arab world. We have this in common with the Arabs, a belief of mine that is supported by the data.

By the way, here is a paper that just came into my twitter feed that proposes a reward system instead of punishment.

https://twitter.com/alexvespi/status/1047409750319796225

by Apostolis (guest), 04 Oct 2018 09:22