Apple uses spite to force planned obsolescence. Watch $750 tier 4 repair performed with $2 in parts.

No I don't, but my company had local apt sources that were pulled and maintained regularly and that I could access. Those are more likely to be updated with the latest packages for my multiple Linux environments than some custom Docker build source.

woosh - You have once again failed to grasp the point. The point is not pulling the latest sources- any idiot can do that. The point is you have absolutely no idea what may break when doing so. Which means everything needs to be tested when there is an update. And that is a LOT easier to do with containers.

And furthermore- your understanding of Docker is so flawed it boggles my mind. Just what do you think the build process looks like? My app has dependencies. Those dependencies are specified somewhere- a makefile, an sbt file, a Maven POM, whatever. Whether I update the dependencies in that file, or do it in a Dockerfile, makes no difference. It's an equal amount of work. The difference is the Docker one is much, much easier to test.
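
Since you apparently need it spelled out, here's a minimal sketch- the image name, package, and paths are made up for illustration. Bumping the dependency is one line here, exactly like it would be in build.sbt or pom.xml:

```dockerfile
# Hypothetical app image- the system-level dependency is pinned right
# here, the same way a library version is pinned in a build file.
FROM eclipse-temurin:17-jre

# One line to edit when the dependency changes- the same amount of
# work as editing a libraryDependencies line in build.sbt.
RUN apt-get update \
 && apt-get install -y --no-install-recommends libwebkit2gtk-4.0-37 \
 && rm -rf /var/lib/apt/lists/*

COPY target/app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```

The difference: I can build that image and run the entire test suite against it in isolation before it goes anywhere near production.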

But as I said- I wouldn't expect you to understand :)

Except you are missing the problem.

No my dear- it's you who are missing the problem.

A classic example is the application model on Android and the like. When you bundle in a common dependency like WebKit and that dependency doesn't get updated, that application has a potential exploit.

Gee- is that how that works? Golly!

It doesn't matter if the application is sandboxed; yes, the other apps are safe. However, that app is fundamentally insecure and will never be updated until someone decides to update it manually.

Again- the primary point of containerization is not security- it's more of a side effect- but you're still completely wrong about "someone decides to update it manually" being a problem.

Information Security is represented by the CIA triad. Automatically upgrading your libraries constantly might help with the Integrity aspect- but your availability will go to shit. Security is a balancing act and your suggestion is anything but balanced.

Let's take your example- WebKit needs to be updated. So what? The very next build will trigger an automated warning that a dependency we're using has a vulnerability. We read the vulnerability, and then we either update the base container, rebuild, and redeploy our apps (something we do multiple times a day anyway), or we update the parent pom/sbt/whatever file and then rebuild and redeploy our apps. We don't willy-nilly update everything on a whim- that's how you get wildly unstable infrastructures. Because everything is containerized, the updates are limited to the app that needs them and we avoid collateral damage.
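
Concretely, the first option is a one-line change (the base image name and tags here are hypothetical):

```dockerfile
# Hypothetical app Dockerfile. The shared base image gets patched once,
# and each app picks up the fix by bumping a single tag:
#
#   before: FROM mycompany/base:2024-05-01
#
FROM mycompany/base:2024-05-14

COPY target/app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]

# Rebuild, test, redeploy- and only this app's container changes.
# Nothing else running on the host is touched.
```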

Since our infrastructure is constantly being redeployed- any vulnerabilities will automatically be caught during the build process and taken care of. That is the beauty of it. Each container gets built, tested (including security checks), and deployed- no fuss, no muss.
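
If you've never seen it, a build that cannot produce a deployable image without passing its tests looks roughly like this. It's a sketch- Maven and the OWASP dependency-check plugin are one common way to do the checks, not necessarily what any given shop runs:

```dockerfile
# Sketch of a multi-stage build- the final image simply does not get
# produced unless the tests and the dependency audit pass.
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /src
COPY pom.xml .
COPY src ./src

# Compile and run the test suite- any failure stops the build here.
RUN mvn -B verify

# Flag known CVEs in the dependencies (assumes the OWASP
# dependency-check plugin is configured to fail on findings).
RUN mvn -B org.owasp:dependency-check-maven:check

FROM eclipse-temurin:17-jre
COPY --from=build /src/target/app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```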

Now let's try the same thing with a VM. We update WebKit, which also happens to update glibc (or any one of a hundred other libraries). The application that uses WebKit is fine with the new version of glibc- but another application on the same box can't use it and crashes. Now you have an environment that works for one application- but not for another.

Your availability just went to shit- and you have an unstable environment.

With Linux VMs I just need to update the apt sources, whilst each individual instance would have its own autostart utility- like pm2 for Node- which reruns the tests and deploys.

Seriously dude- you are in for a world of hurt when you get out into the real world. If your application stack survives a week like that I'd be stunned :)

Try looking up immutable environments and why people use them.

If your entire datacenter gets destroyed in a disaster- how are you going to redeploy? Roll out a bunch of VMs- hope you patched them all to the same level (after all- maybe someone updated the apt sources in the interim), hope you tracked every configuration change perfectly and that no one made any manual changes- and then roll out your apps and pray? Good luck with that.

Or- you can practice immutable infrastructure and blow away your instances every night and redeploy with the current build. At all times you know you can recreate your infrastructure in the event of a disaster- you know every single app will work correctly- and you know your upgrade process works because it's tested constantly.
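
That's all an immutable instance is (names below are made up): everything baked in at build time, nothing ever patched in place.

```dockerfile
# Hypothetical immutable app image- runtime, dependencies, and config
# are all baked in at build time. Instances are disposable: to change
# ANYTHING, you rebuild the image and redeploy.
FROM mycompany/base:2024-05-14
COPY target/app.jar /opt/app/app.jar
COPY config/prod.conf /opt/app/conf/app.conf

# No ssh, no apt-get at runtime, no manual tweaks- so the instance you
# rebuild tomorrow (or after a disaster) is identical to today's.
CMD ["java", "-jar", "/opt/app/app.jar"]
```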

All joking aside- you have no clue what it takes to operate a real software environment at scale.
