Since the client has not received an ACK for its packet, shouldn't it wait for it anyway? After all, the FIN and ACK could always have been reordered in transit, so the ACK could still be coming.
Would using two descriptors for a binary stream (TCP) work, as mentioned in https://cr.yp.to/tcpip/twofd.html ?
Because FIN means that the server has closed the server->client half of the connection. No more bytes are going to arrive.
I noticed this blog talks about technology a lot, so let's phrase this in technology terms. How about running society in a fully distributed fashion? There are about 477 million people in North America. That's like a 477M-core CPU!
We can distribute to all those CPU cores all sorts of tasks:
1. What should I eat? (get rid of farm subsidies.)
2. How should I educate my kids? (get rid of local/state/federal taxes for education.)
3. Where do I find the funds to provide housing? (get rid of HUD, Fannie Mae, tax breaks for mortgage interest.)
By the time you eliminate the central command-and-control megalomaniac OS (taxes and regulation), there will be so many resources available that we'll have much more prosperity as a whole.
Of course, some CPUs will screw up. That's what private charity is for (another massively distributed system that also does not need a central command and control system.)
"But look at the client now: It's waiting for responses and considers incoming FIN to mean "no more responses". But, actually, there were responses! It was just the server was unable to send them."
How can the client assume that there were "no more responses" when it hasn't gotten an ACK back?
Let fools complain.
It seems obvious you need some form of application-level acknowledgement: receiving data is no guarantee it will be processed, for a multitude of reasons (which may have nothing to do with networking). Because one can pull the plug on a server, does that mean TCP is unreliable? What if your message queue is dead and all packets go round and round in a circular buffer?
The protocol is well named: it ensures reliable transmission. If you get an ack, the server has the bits somewhere; whatever happens, they haven't been lost in transit. At this point the important thing to ensure is that either the packets reach the application OR the connection is shut down (orderly or abruptly). In other words, what we really want is (1) to avoid somehow losing SOME of the packets between the NIC and the application, and (2) if packets never reach the application, the connection should probably die at some point (although even this can be argued).
People who don't understand this shouldn't be allowed to touch TCP, much less be listened to.
This probably sounded way too much like a lecture — force of habit — but I realize you're probably not the one I need to convince.
The thing is, I believe just adding app-layer acks is as clean and conceptually simple as it gets. Do you really think muddling with the boundaries will help, or is it just addressing complaints?
So what do you propose? Sure, the server can drop packets after it has called shutdown(). However, if it does so and sends FIN to the client, the client will believe the packets were processed and will complain that "TCP is not reliable".
In the second diagram, I would argue that the server code is wrong: if it can't send responses anymore, it should not process packets either.
The only reason the server should still receive packets is to get answers to requests it sent to the client before shutdown(). It stands to reason that the protocol layered over TCP should be designed in such a way that if you shutdown on the server but still read packets, you should ignore new requests from the client, and answers to your requests should not require answers themselves (or those can be omitted without further complications).
If you think that servers must be able to process requests after shutting down, then that should probably be substantiated better.
The value of these lines is as incentive, providing psychological comfort/sense of fulfillment to the developers, so that they can keep maintaining the parts of the code that do the actual work.
What is really more fascinating is that the very same people who decry the contrived complexity of a codebase tend to rationalize it as they start grokking it (sense of fulfillment), and then typically leave their mark by adding yet another distinct layer of abstraction (psychological comfort). Rationally challenging, or (gasp) removing, useless abstractions is looked upon as an act of hostility toward whoever added them.
I think your argument is about structuring state machines, not so much about select. Without independently schedulable lightweight threads, one is forced to conflate logically hierarchical (or peer) state machines all into one. I agree with you that with goroutines, many state machines can be broken up into tinier, hierarchically related ones.
However, you still need select, since it is the equivalent of nondeterministic choice in CSP, and it is a primitive in the latter for a reason. So point #2 just before your conclusion is not always possible, because there is no linearization there. Independent signals (timeouts, cancellations, peer failures, async I/O completion, etc.) are all possibilities, and they may or may not happen. You have to account for a powerset of the possibilities in any execution.
The suggestion of putting all such signals into a single channel to avoid select is fraught with problems:
1. The receiver dictates the type of the channel to be the union of all incoming signals, so the source has to know details that it is completely unconcerned with.
2. Erlang has the model you suggest. At the same time, the language has built-in support for selective receive (which is select in a different form), yet head-of-line blocking gets in the way more often than you think. In Go, you have to build selective receive yourself, but the essential problem doesn't go away.
3. Flow control affects all sources. There are good arguments against unlimited buffering, and bounded buffered channels impose flow control: one busy source can basically keep out another source feeding to the same channel.
4. This scheme doesn't get rid of nondeterminism anyway, because the first thing the receiver must do is demultiplex the message and process it. It is identical to the select code.
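The "independent signals with no linearization" point can be made concrete with a small sketch (channel and function names are illustrative). A worker must react to whichever of three unrelated events happens first; merging them into one channel would force a union type on the senders and still end in an equivalent demux switch:

```go
package main

import "time"

// worker reacts to whichever independent signal arrives first: new data,
// a cancellation, or a timeout. There is no linearization among them;
// select is exactly the nondeterministic choice CSP makes primitive.
func worker(data <-chan int, cancel <-chan struct{}) string {
	for {
		select {
		case v := <-data:
			_ = v // process one item, then loop for the next signal
		case <-cancel:
			return "cancelled"
		case <-time.After(50 * time.Millisecond):
			return "timed out"
		}
	}
}
```

Note that each arm keeps its own channel type (`int`, `struct{}`, `time.Time`); the single-merged-channel alternative would need `interface{}` plus a type switch, i.e., select rebuilt by hand.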
Re: Why should I have written ZeroMQ in C, not C++ (part I)
I like your reasoning about exception handling and crippled initializations. May I ask though, why do you say Java is messy?
Yet so many sociology / economics papers do not take evolution into account:
Axelrod's paper on the evolution of cooperation has 33,000 cites, but its argument is not evolutionary but
Find the error in the above paper and get a digital cookie.
The problem with these points is that what you call "democracy" has never been democracy. Democracy does not exist in modern nations, because the Founding Fathers of the USA and the most influential people of the French Revolution were never democrats. On the contrary, they described democracy as chaos and as the dangerous power of the plebs.
In the dark ages the word "democrat" was a rare word, used by a few erudites to describe the political model of Ancient Greece. Use of the word increased in modernity, but its original sense has been perverted.
Worse, the Old Regime of France under the monarchy was more democratic than modernity. The King of France did not really manage his kingdom; he preferred womanizing and hunting. Townships had great autonomy, voting on matters by show of hands or with black and white balls. Even during the plague, people held assemblies across rivers.
Francis Dupuis-Déri (Quebec), Démocratie. Histoire politique d'un mot aux États-Unis et en France
Cornelius Castoriadis, Une leçon de démocratie (Chris Marker, 1989)
Consider this for your next chapter.
A. The software is rewritten in a literate programming style, like a book that can be understood by people.
B. Create an issue-tracking system that is easy for society to use, through which society provides feedback.
C. Provide a "crowdfunding" site where society takes decisions about, and funds, changes in the governing structures.
D. Have programmers do the changes.
This might look like fiction, but it is exactly what I am working on. So maybe people will call you the next Jules Verne after that :)))) .
Hm, it's a fantasy; don't take it too seriously. My point was to attack the idea that democracy as we know it is, in Winston Churchill's words, "the worst form of government, except for all the others." In reality, if you look at the system from a systems-engineering perspective, it's easy to imagine piecemeal improvements that would make it work better. For example: more transparency, more checks and balances, better feedback loops, etc.
Who is the reviewer and who appointed him in that position?
You see, the problem is the democratic change and control of the governing structures.
In the Foundation series, there were scientists who were able to predict the future of society and alter its course.
I think that it is time to incorporate in our fiction that the all-seeing eye that is able to view society from above and take decisions is society itself.
One bit Martin (perhaps intentionally, to spark the discussion) left out: the machine can most certainly become racist, because our current ML techniques pretty much amount to a "generalizations box" - a magnifying glass held over our collective data, including its prejudices - not a true consciousness making its own decisions.
In practice: because of institutional racism, black people are overrepresented in crime statistics in the US. We feed this past, overrepresented data in as the learning dataset, and the machine creates neuron-weight circuitry for "aha! black person, most likely a criminal!". Incidentally, it's the exact same mechanism by which prejudice self-amplifies in the general population, too.
One possible approach could be "opinion affirmative action": basically, introduce a reverse bias in the input data against known prejudices - even if it counters past statistics - and hope that it breaks the previous feedback loop of self-fulfilling prophecy (e.g. the assumption that black people are criminals may be exactly what sets them on a criminal path).
The same goes for a lot of ML systems. You won't get justice, only a coarse statistical estimation based on previous observations - not true rational reasoning - unless you give the system far, far more data (as well as advanced reinforcement-style training techniques such as generating hypotheses from the corpus and testing them). This is far more difficult than just dumping some narrow database from the past, yet it is what most commercial "screening" ought to do; otherwise they have merely automated the coarsest of human prejudice.
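One standard debiasing trick that resembles the "reverse bias" idea above is sometimes called reweighing: weight each (group, label) cell so that group membership and label look statistically independent to the learner. This is a toy sketch, with invented field names, not a claim about what any particular screening product does:

```go
package main

// Example is one training record: a protected group attribute and the
// (possibly prejudice-laden) historical label.
type Example struct {
	Group string
	Label bool
}

// Reweigh returns per-example weights P(group)*P(label)/P(group,label).
// Under these weights, the weighted positive rate is identical in every
// group, so the learner can no longer use the group as a label proxy.
func Reweigh(data []Example) []float64 {
	n := float64(len(data))
	group := map[string]float64{}
	label := map[bool]float64{}
	joint := map[Example]float64{}
	for _, e := range data {
		group[e.Group]++
		label[e.Label]++
		joint[e]++
	}
	w := make([]float64, len(data))
	for i, e := range data {
		w[i] = (group[e.Group] / n) * (label[e.Label] / n) / (joint[e] / n)
	}
	return w
}
```

Whether erasing the historical correlation is the *right* correction is exactly the political question the comment raises; the code only shows that the knob exists.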
You consider that it is political corruption that people do not like.
Let me generalize: what people do not like is political decisions that go against the majority.
Now why does political corruption exist? Politicians have political power but they cannot benefit from it individually. The only thing they can do is to sell their political power for money. If the organizations that give them money are democratic, then in a limited sense, we would have some sort of democracy.
But companies are not democratic. They are not accountable to the people for what they do. People work 1/3 of their life in them. Moreover, the Capitalist economy is in crisis, something that the people cannot control.
In conclusion, it seems that the lack of democracy in both the company level, as well as the whole economy level is at fault.
2. Not direct democracy but real democracy.
There are statistical methods to check whether the opinion of the people coincides with the decision of a governing body. We need to build evolvable decision making structures and check whether those structures take decisions that are in agreement with the people's.
If they are not, we evolve them again until we have the correct structure. Each field of inquiry requires a different structure.
I agree with most of your overall analysis, but I have doubts about your proposed solution.
Many issues require in depth studies, analysis, and extended debate - something you can't expect everyone to invest into on all matters.
The complex nuances will make the vast majority of voters susceptible to "easy solutions", populism, appealing rhetoric and slogans.
Strategies involving multiple steps will be launched simultaneously, only to be voted down by supporters of another strategy, leaving behind half-attempts. Each step would have to be championed separately even though people agreed on the overall strategy.
We would have constant fundraising and lobbying, leading to democracy fatigue.
Smaller, more frequent elections would mean lower turnout, which would make each vote more susceptible to both legal lobbying and outright corruption.
Let's use foreign policy on the Middle East as an example. How many people do you know who have the actual knowledge of history, politics, anthropology, war, finance, strategy, and the relations among the countries, groups, etc. in that region to hold a qualified opinion? I certainly don't. Would you feel comfortable answering a question such as "Should we impose a sanction on X?", followed by pages specifying exactly what, who, and when, in details only 0.01% of the population have ever heard of?
Would you want to spend weeks studying this in detail in order to cast a qualified vote?
Anyone saying "X caused war, Y will fix it" does not have a clue.
I don't believe in direct democracy. Quite the opposite. I think we might need more layers - some that are closer and thus easier to hold accountable and debate with.
The first layer should be so close that you can pick someone you trust by their values to represent you, someone close enough that you can talk to them. Their votes should be transparent, and when you don't understand, or are worried, you can debate with them and either be reassured or move your vote.
I am not sure. Imagine LISP writing LISP. The problem is that while you can easily read the code of the generator, the generated code is spread throughout the source code and thus not easily readable.