Recent Forum Posts

Have to say, you have a great bias against China.

by RocWay (guest), 14 Dec 2016 08:33
Daniel Tracy (guest) 13 Dec 2016 03:14
in discussion Hidden / Per page discussions » Why should I have written ZeroMQ in C, not C++ (part I)

Martin Sústrik,

I really enjoyed your May 2012 post "Why should I have written ZeroMQ in C, not C++ (part I)".

I agree with your assessment of C++ exceptions. Many commenters don't quite get what you did not explicitly state: a far-away error-handling method designed to handle multiple exception types for convenience is unlikely to be able to handle errors in a fully recoverable way. Exceptions are designed to decouple error handling from the immediate caller, which is the only context that truly understands the API and the state it is dealing with. An exception is a goto to a (statically) unknown destination at the worst possible time (a crisis moment).

Good points have been made that C-style error handling:
1. can be ignored without a run-time error
2. takes up your only function return value.

Newer languages seem to be eschewing exceptions in favor of a more C-like approach that solves the above two issues using tuple-like return types ("value or nothing", or "value or error code") that have to be decomposed and explicitly checked for errors before the return value can be used. This also allows the compiler to guarantee that an error code is checked, rather than relying on run-time detection. Progress is being made, but too late for C++.

My perspective is that exceptions were designed to solve a local problem (consolidated error handling) in a way that introduces a bigger problem "in the large" (on the scale of the project, rather than the function). In fact, I find that a few C++ features (especially the early ones) make this kind of bad trade-off.

Interestingly, I was recently developing an in-memory database as a library (in support of a game) and I began by using plain C. There were many good reasons in my mind to do so:

1. I would very much like to avoid exceptions and OOP (inheritance, virtual functions), and probably other "features" as well
2. A C API can be used from C, C++, or scripting languages (C++ has no standard name mangling, making linking to C++ generally unsupportable)
3. A plain C interface eliminates library-seeping-into-interface problems with Boost types, smart-pointer types, etc. that cause version/linking issues for the main application (which may use many libraries, each of which may want different versions of the same sublibraries!)
4. probably more I don't remember

However, as the project evolved, I found *some* C++ features to be very desirable, so the project evolved to a subset of C++. The main features I use are:

1. function overloading (if the application changes a type, they don't have to change every function call in the project)
2. generic programming (via templates): eliminates code duplication, works well with #1, and makes for a smaller code base
3. lambdas (via #2): C cannot do callbacks performantly or well

Note that I still do memory management C-style (no constructors, destructors, or methods: plain functions only). This has better encapsulation properties than C++'s methods-in-a-class approach, which must declare its internally-used data types "publicly" in the header file! (And must therefore also include all the headers needed to describe those types: the difference in header-file size is striking.)

Of course I also use C-style error handling. The API is technically C++ (for function overloading), but is kept to a plain, no-library style.

So question, Martin: did you ever consider (for nanomsg, for example) using C++ but simply limiting your feature use to a small, useful subset? Do you not find any features of C++ compelling enough?

by Daniel Tracy (guest), 13 Dec 2016 03:14
Duncan Bayne (guest) 29 Nov 2016 02:51
in discussion Hidden / Per page discussions » Centrifugal Governor or Why I am not a Libertarian

You're starting from a false premise, at least according to the Austrians, who argue that an economic 'centrifugal governor' is actually impossible:

by Duncan Bayne (guest), 29 Nov 2016 02:51
Keith (guest) 26 Nov 2016 21:16
in discussion Hidden / Per page discussions » The Cost of Abstraction

I think the cost/benefit is primarily determined by your group, by EVERYONE in it. Unlike yourself, I haven't worked on OSS projects, so I haven't encountered the social and political project issues at that scale. In such contexts, where folk turn up from everywhere, with wildly different levels of skill, competence and experience, more formalism is what's required to keep the motley group together. But because we get enough of that in our day jobs and just want to get things done, I expect that's the last thing people want.

It's human to use abstractions. But abstractions are cultural. English in England uses the word Binge. But that word has no real meaning or use outside of British English.

Similarly, some C++ cultures clearly know what they expect to happen in the face of an exception, while others know and expect a different model, while some never really think too much about it up front.

Culture. Abstractions without culture. I think that's where things begin to break down. I think abstractions are defined within a culture.

Your inheritance hierarchy, for me, demonstrates a different problem. Using the wrong tools, abstractions included, creates problems of its own.

by Keith (guest), 26 Nov 2016 21:16
Gerard Toonstra (guest) 08 Nov 2016 18:00
in discussion Hidden / Per page discussions » The Cost of Abstraction

Very insightful. I like thinking about software complexity, and one of the keys to dealing with complex software is that the design and intentions should be communicated (which means they are either documented or exist as a common understanding of purpose and function).

From your perspective, it means that there is also a need to establish agreements on the levels, depth and ways in which abstractions in the code are formed. Indeed, I have worked with software where the functions and operations weren't implemented in a messy way per se, but the many levels of indirection, abstraction (and obscurement) made things really difficult to read and a real tail-chaser when it came to maintenance.

Those levels can also make it much more difficult to understand the flow and the operations that are happening, because in many languages you pass references to data objects, so data gets changed in many ways.

Nice article, puts me into thinking mode again! :)

by Gerard Toonstra (guest), 08 Nov 2016 18:00
gleber (guest) 08 Nov 2016 16:03
in discussion Hidden / Per page discussions » The Cost of Abstraction

Agreed with Apostolis. Costs of abstractions depend a lot on the context.

In my opinion, social consensus is necessary only due to insufficiently good tooling, mostly type systems and the compilers for them. If a function's specification is completely defined by its type and validated by a compiler, then this cost of abstraction goes away. Coq, Idris, Agda and, to a somewhat lesser degree, Haskell have type systems strong enough to avoid this cost in the majority of cases.

Another aspect of abstractions is their reusability and composability. If an abstraction can be learned once and reused over and over again, its benefit outweighs its cost. If an abstraction composes well with existing abstractions (i.e. it fits well into the existing ecosystem), its benefits are much higher than those of ad-hoc abstractions.

In my experience, the only abstractions which satisfy all these properties, and in the majority of cases are a net positive, are abstractions based on mathematics, used in strongly typed functional languages. Such abstractions are general and apply to many areas (semigroups, monoids, foldables, monads and arrows are used by almost all software developers without their even knowing it), i.e. they are reusable. These abstractions are all about composition (e.g. pure total functions, monoids, monads, arrows, categories), both between their instances and between different abstractions (e.g. it is well understood how to lift a pure function into a monadic function, or how to fold over values which form a monoid, etc.).

by gleber (guest), 08 Nov 2016 16:03
Apostolis (guest) 08 Nov 2016 13:57
in discussion Hidden / Per page discussions » The Cost of Abstraction

Dependently typed languages trade a reduction in the cost of social consensus for the cost of writing a very strict specification and proving that your implementation abides by it.

So even in these languages, you can decide to be sloppy as in any other language because you do not want to pay the upfront cost.

On the other hand, in these languages, the specification acts as an input when you program. In every step, you know whether you are doing something wrong or not. It can even generate part of the code.

In general, I think that the ability to avoid social consensus and its cost, given a good specification, outweighs the cost of defining the specification.

(Keep in mind that the (for ex. TCP) specification is a document that is interpreted by the human brain. In Idris, the interpretation happens by the typechecker. )

by Apostolis (guest), 08 Nov 2016 13:57

Yes, true.

I guess the thing here is that widely accepted abstractions like bitcoin, TCP or the socket API are "worth it", meaning that the cost of seeking consensus is lower than the cost of not having consensus. As for my_hacky_helper_foo(), it's the other way round.

by martin_sustrik, 08 Nov 2016 12:41
Apostolis (guest) 08 Nov 2016 11:59
in discussion Hidden / Per page discussions » The Cost of Abstraction

Social consensus can, though, be enabled by technical means.
Consider Bitcoin: its social consensus is mediated by the blockchain and mining.

In dependently typed programming languages like Coq, Idris or Agda, the type of a function is its specification. Here again the typechecker reduces the cost of social consensus on abstractions.

Since you work on network libraries, it is worth mentioning that session types do the same thing for network protocol specifications and their implementations.

by Apostolis (guest), 08 Nov 2016 11:59

~5 years programming and I think Tim is right. Understand your tools before you use them, otherwise just make your own.

by Sam (guest), 28 Oct 2016 14:44

I know this is an old post but it raises an important point and deserves a reply.

I had to live through this on one of my infrastructure projects. Given the horror stories about C++ exceptions, I also decided to use error codes. It was fine initially, but I soon noticed a lot of bloat in my code from checking error codes and passing out-params as arguments. You lose the return type of a function to an error code, and that is an issue. Consider this example:

///// WITH ERROR CODE ///////
int out_param;
int code = func(&out_param);
if(code != 0) {
    // do something, or just return the code so it can be propagated
}

Compare that to a function call if we were using exceptions

///// WITH EXCEPTIONS ///////
int out_param = func();

The other problem with error codes is the empty constructor / init function that you covered in great detail. I did the same when I was using error codes, and it was painful. Switching to exceptions meant that I could throw from constructors. This also meant I could implement RAII, which is probably my favourite thing. An object, once created, is always in a valid state, and that simplifies a whole lot of things.

Plus I wanted to reply to some statements:

"C++ exceptions just didn't fill the bill. They are great for guaranteeing that program doesn't fail — just wrap the main function in try/catch block and you can handle all the errors in a single place."
The purpose of exceptions is the exact opposite. They ensure that an error is not missed: if you don't handle the exception, you fail fast and the process is terminated. They force you to do proper error handling. With error codes, on the other hand, it is 100% on the developer to catch and propagate them, so the possibility of a bug, of missing an error condition, is much higher.

"If you don't give up on the "no undefined behaviour" principle, you'll have to introduce new exception types all the time to distinguish between different failure modes. However, adding a new exception type means that it can bubble up to different places. Pieces of code have to be added to all those places, otherwise you end up with undefined behaviour."
The same problem happens with error codes. New error codes will surface, and they have to be propagated and handled in different places.

"However, what's great for avoiding straightforward failures becomes a nightmare when your goal is to guarantee that no undefined behaviour happens. The decoupling between raising of the exception and handling it, that makes avoiding failures so easy in C++, makes it virtually impossible to guarantee that the program never runs into undefined behaviour."
Catch and handle the exception, or the process is terminated. It can't get any more well-defined than that!

Conclusion: Error handling in general is HARD! But exceptions are a better tool for attacking the problem.

by Zarian (guest), 09 Oct 2016 11:50
John Carter (guest) 03 Oct 2016 04:28
in discussion Hidden / Per page discussions » Celestial Emporium of Benevolent Knowledge

Hmm, no, OOP isn't about taxonomy.

It's about class invariants.

I.e. constraints on the state space of the instance variables.

And the Liskov Substitution Principle is really just saying you can use inheritance if and only if the child class's state space is a subspace of the parent's.

by John Carter (guest), 03 Oct 2016 04:28
Martin Sustrik (guest) 18 Aug 2016 09:31
in discussion Hidden / Per page discussions » The Awe of Cryptography

Well, it's not like NP suddenly equals P with quantum computing. There's a specific class of problems that can be solved in polynomial time (factoring into primes being one of them), yes. All of them? Hardly.

by Martin Sustrik (guest), 18 Aug 2016 09:31
Alexandre Quessy (guest) 17 Aug 2016 22:51
in discussion Hidden / Per page discussions » The Awe of Cryptography

What about when quantum computing becomes readily available? It seems to me that current cryptographic techniques will be useless against that computing power. Hence crypto-currencies will also become worthless.

by Alexandre Quessy (guest), 17 Aug 2016 22:51
Apostolis (guest) 29 Jul 2016 23:16
in discussion Hidden / Per page discussions » Debt Cancellation Referendum

Well, the debt level will never reach equilibrium; debt needs to increase to sustain the profitability of companies.

But despite our disagreement on this, and the shortage of actual historical arguments from Tobias Stone, he is right.

And your solution is exactly what we want. We need to show people that they matter and that they can perform actual change.

by Apostolis (guest), 29 Jul 2016 23:16

Great post. Sorry to raise the dead.

The Apache License is often superior to 2-, 3-, or 4-clause BSD and MIT for freemium/commercial open-source projects, especially with regard to patents and intellectual "property", and it's GPL-compatible on the libre side of things. For a few examples, Android, Swift, Puppet, Cloudera/Hadoop and obviously many other Apache projects use it. BSD and MIT are simple but don't sufficiently watch out for customer concerns. Finally, nearly anything which gains even modest success in the realm of distributed infrastructure will probabilistically encounter patent trolls demanding money for bullshot claims… better have your ducks sexāgintā quattuor aligned.

by Anon7 (guest), 19 Jul 2016 16:35
Daniel Kubec (guest) 13 Jun 2016 01:18
in discussion Hidden / Per page discussions » Coroutines in C with Arbitrary Arguments

Easily fixed :). Use a real compiler on Windows, gcc/clang, with C99 support.

It's quite sad that the VC compiler (c) 2013 does not support the 1999 standard.

by Daniel Kubec (guest), 13 Jun 2016 01:18
Dave Crossland (guest) 06 Jun 2016 23:15
in discussion Hidden / Per page discussions » Software Licenses and Failed States

reqshark, fortunately the licenses are not contractual, but have a basis in copyright instead, so they do not need consideration.

by Dave Crossland (guest), 06 Jun 2016 23:15

Yeah. When explaining the mind-bending qualities of cryptography to lay people, I often use a related example:

Can you, without any prior communication, share a fact with someone when there's another person listening to the whole conversation?

Sounds impossible, right?

Now imagine I want to check whether you speak Spanish. I'll ask you to translate several words and after 20 or so of them I am pretty sure you know Spanish.

The eavesdropper has no idea though, because he cannot be sure that we haven't agreed on the list of words in advance. All he can know is that you know 20 Spanish words.

by martin_sustrik, 22 May 2016 06:16

It happened to be encrypted using OTP.

by martin_sustrik, 22 May 2016 06:05
Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License