Discussion in 'Business & Enterprise Computing' started by elvis, Jul 1, 2008.
No idea, especially given the trainwreck record of pulseaudio
I honestly don't fucking know. I genuinely think he just did it through pester power. There seems to be a concept in technology that anything old is bad, and he successfully advertised himself as the only person who cared about init being "broken" because it was old.
Maybe it's a generational thing, maybe it's an "elvis hates millennials" thing, I dunno. But systemd is a solution looking for a problem. I just think that there's not enough greybeards left willing to fight the good fight any more to stop shit code making it upstream.
Open source is great. I live by it, even with its faults. But like free speech, which it is modelled off, the glaring downside is that any idiot with a keyboard and an opinion can submit code. Normally the whole concept of a meritocracy prevents the shit code from making it anywhere past someone's own personal github account, but for some reason we're seeing a lot of really dumb things make it public lately.
And then to make it worse, everyone else follows suit in some sort of misguided attempt to homogenise the whole thing, which is one of the things that drove me away from other operating systems that limit my choices.
So maybe it's just what happens when Linux stops becoming about nerds bashing out code, and starts becoming driven by pointy haired bosses and suits.
The thing is, I don't want Linux to stay "basement neckbeard" forever. All I want is the people writing code to have a little bit of experience in general computer science (not even "just Linux", but the larger history of computing) and understand that shiny shit doesn't always equal good shit. But lately I just see the whole industry, not just Linux, devolving into this stupid consumer-focused shiny gadget worship that's destroying things that were once reliable.
It's all honestly very fucking depressing.
2017, Year of Linux on the Desktop.
I know elvis has his philosophical reservations (and validly so), but I'm probably in the camp where the change has been a good thing. It really was the best system out there, if you're willing to give up the legacy way of doing things.
I'm probably also in the camp where some of the complexities exceed normal workstation use, being reliant on customised changes and software / container startup.
Systemd makes the actual config files easier to implement and certainly the dependency side far easier (especially cross-distro). It's more than just the init side of things, which is where the hesitations stem from.
It's basically a more feature packed version of launchd (ie OSX) but also makes it easier for package maintainers who work on cross-distro stuff.
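To illustrate the config side, here's a hypothetical unit file (service name and paths are made up) — dependencies are declared in a couple of INI-style lines instead of hand-rolled script ordering, and the same file works unchanged on any systemd distro:

```ini
# /etc/systemd/system/myapp.service (hypothetical example)
[Unit]
Description=Example app that needs the network and a database
After=network-online.target postgresql.service
Wants=network-online.target
Requires=postgresql.service

[Service]
ExecStart=/usr/local/bin/myapp --config /etc/myapp.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```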
The desktop is dead, dude. Cloud for lyfe.
This is Poettering's MO - clone OSX badly. All of his tools are OSX copies.
And therein lies the problem. OSX does a shit job of it, with everything designed for single user. Even if you do an amazing job at coding these things, you're taking terrible, single-user concepts and porting them to an OS designed for large, enterprise, multi-user workloads.
My Linux laptop boots 10 seconds faster with systemd. I don't care, that's not why I use Linux.
To be fair though, launchd is quite simplistic.
Systemd isn't single-user at all; in fact it's better suited to complex environments. We use cgroups quite heavily, so it's a good fit here.
The upside of doing things more efficiently can be speed, but that wasn't the driving feature. When dependencies can be better managed and tracked, you get better startup optimisation.
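The cgroups point is worth a sketch. With systemd, per-service resource limits are just unit directives (the values below are hypothetical), and each service lands in its own cgroup where the kernel enforces them; no manual cgcreate/cgexec plumbing needed:

```ini
# hypothetical drop-in, e.g. /etc/systemd/system/myapp.service.d/limits.conf
[Service]
CPUQuota=50%
MemoryMax=2G        ; MemoryLimit= on older systemd versions
TasksMax=512
```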
This is often pushed as the major selling point to systemd. However it ignores the fact that dependency management within sysv-init could have been added without migrating wholesale to a different init system.
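For context, this is roughly what sysv-init dependency metadata looked like via LSB headers at the top of an init script (a made-up example); the mechanism existed, it was just inconsistently honoured across distros:

```sh
### BEGIN INIT INFO
# Provides:          myapp
# Required-Start:    $network $remote_fs postgresql
# Required-Stop:     $network $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Example app init script
### END INIT INFO
```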
And this is somewhat my point. Systemd is what happens when you throw the baby out with the bathwater. And that approach to problem solving seems more and more common in today's all agile/devops/millenial-coder world.
From my biased perspective, I've yet to come across an actual business problem that systemd solves. Conversely, I'm now at a dozen problems it's created that I have had to solve, which has cost my employer time and money.
Well I guess it's not so much "why did systemd win on distro X", it's more "why has it won, seemingly everywhere"? Arch is the only thing I can think of that hasn't cut over to it - and well, it's not a "business" orientated distro.
anyway, time to pre-populate ~40 Hyper-V VMs in prep for this weekend's move.
At best, the dependency addons for sysv were very simplistic and didn't work effectively across different distributions. I was running Supervisor simply to work around some of the init issues, which meant a separate system again. It was far from ideal, but the standard tools didn't provide enough dependency management to do it repeatably and seamlessly.
I've found it to simplify things and provide greater consistency, which as someone supporting a number of different distros with a small team is a godsend. It's simplified our dev workflow and our issues when trying to work cross-distro. That's why I can overlook any philosophical differences: to me there is actual benefit.
It was far better than anyone else had. Most of the distros agreed that the maintenance of the init systems was a headache, so a replacement was very welcome. systemd gave all the benefits of Upstart (Ubuntu's replacement for the init side of things) and added more flexibility and more control. They all had faults or nuances, so it was more about the best way forward (even if it meant initial pain).
Linus himself probably sums it up best:
Binary logs I dislike, and I've had journal corruption... but at the same time I also dislike logging to the server itself. My logging is centralised and indexed via an Elasticsearch cluster, which when you have anything more than a couple of servers (we manage many hundreds) is a godsend. It means the only time I'll really need local logs is when there are catastrophic issues, and at those times I'd certainly prefer flat, plain files.
But as Linus points out, it's a bugbear but certainly not enough to make me consider anything else.
yeah I never got the log issue tbh.
As fast as grep/awk/etc are (with all their cheaty/smart programming behind them) - they simply aren't *that* fast when you start talking hundreds or thousands of servers.
I thought in your gig, you took it out the proverbial back and put a bullet in it - then spun up a new one?
I don't want a clever log system on the local machine that can scale to "hundreds or thousands of servers". I want a log system on one machine that I can scan through even when the system is completely fucked and without special tools. Jesus, there are times I've literally had to run "strings" over a totalled hard disk just to fetch out blocks of readable ASCII to try and figure out what went wrong.
If I want to aggregate logs from "hundreds or thousands of servers", I have a multitude of other tools to do that for me.
I know people get tired of hearing all about the UNIX philosophy and modularity, but it's a good philosophy. Do one thing, and do it well. Local logging should log locally, and do that well. Aggregate log tools should do that well. I don't want an aggregate log tool capable of enormous feats loaded on every piss-ant little system.
Just write to a text file locally, and send a copy of that over syslog to something else. That something else can have the biggest, baddest, coolest binary do-dad with all the realtime compression and elastic search and logstash and splunk and whatever the fuck else it needs to do. That's hella awesome. Just do it somewhere else, not on the local host.
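That split is a few lines of standard syslog config. A hypothetical rsyslog fragment (the loghost name is made up): local text files keep being written as normal, and everything is also forwarded to the central box that runs the heavy tooling:

```
# /etc/rsyslog.d/forward.conf (hypothetical)
# local plain-text logging continues untouched; this just adds forwarding.
# "@@" forwards over TCP; a single "@" would use UDP.
*.*  @@loghost.example.com:514
```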
And yes, there are exceptions to that rule (next-gen storage, for example, where breaking it was necessary to make things more reliable as old assumptions about hardware were no longer reliably true). Systemd adds nothing by breaking the rule for logging. World+dog in real business turns binary logging off.
Even a business with millions of cattle has at least one pet somewhere. (Your "cattle dog", to suit the analogy perfectly).
I agree with Linus. I think the difference is those "details" affect me directly (versus Linus, who is not an enterprise sysadmin). IMHO Systemd shipped 3 years too early. Someone with a clue should have been put in above Lennart, slapped him around for a couple of years, and "enterprise-proofed" his stupid ideas.
This is the same kid who came up with Network Manager, Avahi and Pulse Audio. Every single one of those was completely fucked for years too, until quality developers reined the projects in and stopped them being so stupid.
Lennart is a passionate kid who's full of potential, no doubt. But as the old saying goes, with potential and $3.50, I can buy a cup of coffee.
An argument that comes to mind is that if you compare syslog in 2016 to syslog in even 2000 - there is a significant increase in noise.
Maybe others have come to the conclusion that the time has come to move on from "special tools" such as grep etc because logs are already unwieldy.
but what the fuck would I know? I'm a Windows guy who has had binary logs since forever and doesn't give a load of fucks about it - MS changes the world every 6-8 years and we just bitch and whinge pretty hard about it for a while, linux guys go "HAHA M$ Sux0rs" for a bit, then we get on with it; the 3rd gen of whatever the fuck they were doing comes along and it's pretty great.
Meanwhile I have a client with a broken AD and we don't have the DSRM password.
(yes I know how to fix this - it's just great to hear at 6pm on Monday evening).
awk etc are great one-time parsers or background parsers, and it's not speed that will send you looking elsewhere (they're still very fast!). The reason is volume, and being able to visually filter down to the data you want. Indexed data makes that trivial, especially if you want to graph / measure the frequency of certain events.
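For a single box, the classic pipeline does the frequency-counting fine; it's the interactive, fleet-wide version of this that indexed logs are for. A toy sketch:

```shell
# tally how often each HTTP status code appears in a (fake) access log
printf 'GET 200\nGET 500\nGET 200\n' \
  | awk '{count[$2]++} END {for (s in count) print s, count[s]}' \
  | sort
```

Pipe in real logs and swap `$2` for whichever field matters; the point is this stops being interactive once you're fanning it out over hundreds of hosts.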
Yes... but as elvis has hit on, everyone has pets (and loads of clients have very special pets). Any fault needs to be diagnosed to determine a root cause anyway, which means trawling through logs. Underlying infrastructure is fault tolerant, so I don't have to panic... but a client's VM with issues means they panic.
The visualization of data has led to the increase in noise; now management want pretty graphs they don't understand instead of walls of text. More pretty pictures to show the board.
I don't consider that an issue at all. Here's why:
* Storage has gotten bigger
* Compression has gotten better (logrotate to xz instead of gz)
* Filesystems have gotten smarter (zfs, btrfs, inline compression "for free")
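The logrotate-to-xz point is a two-line config change. A hypothetical fragment (paths and retention are made up):

```
# /etc/logrotate.d/myapp (hypothetical)
/var/log/myapp/*.log {
    weekly
    rotate 12
    compress
    compresscmd /usr/bin/xz
    uncompresscmd /usr/bin/unxz
    compressext .xz
}
```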
Again, this is the modular approach to things. Loggers should log. They shouldn't get all smart with data storage. We have other tools that do data storage better.
I keep banging on about the UNIX philosophy. "Do one thing, and do it well". The very moment a logger tries to do something clever with the data it stores, I can 100% guarantee you it will be worse than any option I can tailor-build to my business needs from thousands of small, existing, built-in tools.
Developers are renowned for not looking outside their microcosm for solutions. UNIX is nothing but a large collection of small tools. The moment you try to make your tool bigger and do a bunch of stuff you think is "cool", all you're doing is ruining someone else's much smarter approach.
Keep it simple, stupid.
It's now faster for me to xzgrep compressed files than it is to grep uncompressed files. And xz gives me some bloody ludicrous compression rates on plain text (more so with lots of repeated words). Really, not an issue on a single system.
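Quick demo of the workflow, assuming xz-utils is installed (xzgrep ships with it):

```shell
# compress a sample log with xz, then search it in place
printf 'ok\nerror: raid controller timeout\nok\n' > sample.log
xz -f -k sample.log              # -k keeps sample.log; produces sample.log.xz
xzgrep 'error' sample.log.xz     # greps the compressed file directly
rm -f sample.log sample.log.xz
```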
And again, when you're aggregating things to a special server, that's a different tool. Don't force your big aggregate binary tool thing on every host in my network. Just collect syslog like everything else, and collate it in one spot to do your black magic on. (And now you know what Splunk and Logstash do).
Wasn't the whole issue with systemd that it tried to do too much at once, introducing various potential issues if it wasn't done well? Was it you, elvis, that put up a good blog post discussing the issues with it?
I've been away from here too long
Well fuck me.
So the server has been silently corrupting data for about 11 days (it's a Cisco C2x0 box) - I strongly suspect the raid card or backplane in it is dead/dying.
I had to insert a DB from the 19th to get AD to post (recovery unsuccessful).
Exchange happily fixed its DB, but the Sender Reputation DB (really? because you're shit at this Exchange...) also shat itself.
Finally get all this up and running - start spinning up Server 2012 R2 and Exchange 2016 to migrate them off this toxic hardware - and they get crypto'd whilst I'm writing up an incident report.
Talk about fucked luck. Jesus Fucking Christ.
TO THE BACKUPS MOBILE DAD.
You just have to laugh.