I really hate installing Java on my personal machines, but I still want to develop in Scala. Part of the Docker promise is that you develop in the same environment as your production environment. Let’s actually set that up.
Add this alias to your $HOME/.bashrc file. Reload your bashrc, and the sbt command will run the Docker image in your current directory.
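A sketch of what that alias can look like. The image name (hseeberger/scala-sbt) and mount points are assumptions; swap in whatever sbt image you actually use.

```shell
# Run sbt from a Docker image, mounting the current directory as the project root.
# The image name is an example; substitute your preferred sbt image.
alias sbt='docker run -it --rm -v "$PWD":/app -w /app hseeberger/scala-sbt sbt'
```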
What if you want to serve HTML from a random directory? Create another alias for the Apache Docker image! This works great for any code coverage libraries that output HTML.
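For example, an alias along these lines (the alias name and host port are arbitrary choices; the htdocs path is where the official httpd image serves files from):

```shell
# Serve the current directory over HTTP on port 8080 using the official Apache image
alias serve='docker run --rm -p 8080:80 -v "$PWD":/usr/local/apache2/htdocs/ httpd'
```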
There’s too much information to consume. We need to be tactical about advice we take. Why give any of your time to someone horrible?
“But, Jim,” you say. “If the person is giving good advice, it doesn’t matter whether or not they’re a bad human being.”
Do you really want to be that person’s editor, searching for the good content in the bad? Whatever good advice you find is aligned with their philosophy. If you don’t agree with their philosophy, you’re going to need to hear that same advice from multiple people before you can trust it.
For instance, why is James still in the news? Who is taking his advice?
I don’t remember where I first heard of him, but I realized his advice wasn’t for me after reading a profile on him that I can’t find. (I’ll never find the original article. His SEO game is on point) There are two stories I distinctly remember. When James’s first child was born he would wake up early and go hide in a coffee shop. I can find some proof of this story in this Mixergy transcript. James is very proud of the second story. He secured a funding letter of intent for an idea without the implementation or the ability to ever implement it. He then negotiated a deal to buy a company in that idea-space and used that company as an asset to raise money.
All of his takeaways are wrong. In both stories the true moral is lost on him.
Update: Another Example after Only Eight Days
"Kids who play video games do better as adults"  summarizes the studies that show you should let kids play as many and as much video games as they can stomach 👍 https://t.co/DUNg0pJ7K4— DHH (@dhh) February 9, 2018
I agree that play time limits are dumb, but Penelope Trunk isn’t the best person to take advice from…
- Getting started with Kinesalite by Gavin Stewart - Gavin’s guide gave me the first clues that I needed to make this work
- Akka Stream Source.actorRef
- Akka Alpakka AWS Kinesis Connector
- awscli kinesis commands
At work I need to move an application to Kubernetes, but some of its logs are needed for user usage data and ad impression tracking. Setting up rolling logging from our container to AWS S3 looked more complicated and risky than our current setup, so we didn’t even investigate it. Other applications at the company use AWS Kinesis, so it made sense to do the same. I wrote a code snippet to push log messages to Kinesis via an Akka Stream. I could get everything working in the REPL except for the part that pushes to Kinesis.
I tried to use Kinesalite, an application that implements the AWS Kinesis API, but there aren’t any guides for getting it up and running. I assumed you would just start Kinesalite, point your Kinesis endpoint at localhost with the right port, and it would just work. I did that, and nothing happened. No error messages, no Kinesalite log messages, nothing.
It took way too long (two days) to figure out how to write to Kinesalite. Here’s what it took.
Writing to Kinesis
Below is my example code to write to Kinesis. It creates an Akka Stream to which you can send messages. I tested each part of the Akka Stream in the Scala REPL.
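A minimal sketch of such a stream, assuming the AWS Java SDK v1, Kinesalite on its default port, and illustrative names (the stream name, actor name, and partition key are assumptions):

```scala
import java.nio.ByteBuffer

import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, OverflowStrategy}
import akka.stream.scaladsl.{Sink, Source}
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder
import com.amazonaws.services.kinesis.model.PutRecordRequest

implicit val system = ActorSystem("kinesis-logger")
implicit val materializer = ActorMaterializer()

// Point the client at local Kinesalite (4567 is Kinesalite's default port)
val kinesis = AmazonKinesisClientBuilder.standard()
  .withEndpointConfiguration(
    new EndpointConfiguration("http://localhost:4567", "us-east-1"))
  .build()

// Messages sent to this actor flow through the stream and into Kinesis
val logActor = Source.actorRef[String](bufferSize = 1000, OverflowStrategy.dropHead)
  .map { message =>
    new PutRecordRequest()
      .withStreamName("logs")
      .withPartitionKey("partition-1")
      .withData(ByteBuffer.wrap(message.getBytes("UTF-8")))
  }
  .to(Sink.foreach(request => kinesis.putRecord(request)))
  .run()
```

Materializing the stream gives back an ActorRef, so any part of the application can log by sending that actor a plain string.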
Setting Fake AWS Credentials
Either the AWS Java SDK requires credentials (my bet) or Kinesalite requires credentials even though it doesn’t care what those credentials are. Create a $HOME/.aws/credentials file with the below contents.
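The standard AWS credentials file format works; the values just have to exist, so anything obviously fake will do:

```
[default]
aws_access_key_id = fakeAccessKey
aws_secret_access_key = fakeSecretKey
```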
This was the last step I completed to get Kinesalite working. Neither the AWS Java SDK nor Kinesalite showed a single error message when trying to connect to Kinesis without authentication credentials.
Install the AWS CLI Tool
You need this to set up Kinesalite.
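One way to install it, assuming pip is available (Homebrew also carries it as awscli):

```shell
# Install the AWS CLI via pip
pip install awscli
```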
Creating Your Stream
I didn’t know you had to do this. Ops created our staging and production streams. I expected Kinesalite to accept requests for any stream, but I guess it behaves exactly like AWS Kinesis.
Run the AWS CLI tool with the following parameters. It is super sensitive to typos: I copied and pasted from examples without any noticeable spelling errors, and the only message it gives is something like “--stream-name is required”.
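The two commands, assuming a stream named logs and Kinesalite’s default port 4567:

```shell
# Create a one-shard stream on the local Kinesalite instance
aws kinesis create-stream --stream-name logs --shard-count 1 \
    --endpoint-url http://localhost:4567

# Confirm it exists
aws kinesis list-streams --endpoint-url http://localhost:4567
```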
The first command creates your stream. The second command lists all existing streams.
Send Messages to Kinesis
In your REPL, send something to Kinesis.
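If the stream was materialized from a Source.actorRef, sending is a plain tell (the name logActor is an assumption):

```scala
// logActor is the ActorRef materialized from the Source.actorRef stream
logActor ! "hello kinesis"
```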
Verifying the Output
There are two parts to reading what has been pushed to Kinesis. First, you need to find the shard iterator to read data from the stream.
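Assuming the stream name logs and Kinesalite’s default port, fetching the iterator looks something like:

```shell
# TRIM_HORIZON starts reading from the oldest record in the shard
aws kinesis get-shard-iterator --stream-name logs \
    --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON \
    --endpoint-url http://localhost:4567
```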
Once you have the shard iterator, you can read all of the records since that iterator. Replace the --shard-iterator value with the one in your output.
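Something like the following, with the placeholder swapped for your iterator:

```shell
aws kinesis get-records --endpoint-url http://localhost:4567 \
    --shard-iterator <iterator-from-the-previous-output>
```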
Your record is a base64-encoded string. The following Scala snippet will decode it back to what you pushed to Kinesis.
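The decode is one line of java.util.Base64 (the sample data value here is illustrative, not from a real get-records response):

```scala
// The Data field from get-records, copied out of the JSON response
val data = "aGVsbG8ga2luZXNpcw=="
val decoded = new String(java.util.Base64.getDecoder.decode(data), "UTF-8")
// decoded == "hello kinesis"
```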
Sidenote: Keep Your Libraries up to Date
Sending a Kinesis request for every log message is inefficient. There’s an Akka Flow stage called groupedWithin that lets you batch your requests by either a number of elements or a timeout. If you don’t reach the element limit within your timeout, groupedWithin will flush your batch. Even better, there is groupedWeightedWithin, which lets you specify a weight. Kinesis has a 1 MB limit for its payloads, so we can batch our requests until we get close to 1,000,000 bytes.
We can’t use groupedWeightedWithin. Our application is still running on the Spray web framework. The latest Akka it supports is 2.4. The groupedWeightedWithin function wasn’t introduced until Akka 2.5.1. We’ll have to wait until we upgrade our application to Akka HTTP before we can use it.
If we kept our libraries up to date, we would have access to groupedWeightedWithin.
- Our current log roller was written by someone who doesn't work here anymore, and it's not dependable at all. It's based on logback, which is designed to fail itself before it ever fails your application. One time we had a bug that could result in two instances of our application running at the same time. Only one of them was bound to the port that received connections. Our log rolling library would lock the log file during rollover. Inevitably, the application not receiving requests would roll the logs and lock the other application out of ever writing to the log file.
Written while watching tv for Phoebe’s work’s secret Santa. Recorded for next year.
Every time I upgrade Postgres major versions, I need to google for these steps. This usually happens after I’ve run brew upgrade and my database stops working. Here are the steps for future reference.
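The rough shape of those steps with Homebrew (the upgrade subcommand exists in newer Homebrew; treat this as a sketch, not the exact incantation for your versions):

```shell
# Stop the running server before touching the data directory
brew services stop postgresql

# Homebrew's helper runs pg_upgrade against the old data directory in place
brew postgresql-upgrade-database

# Start the new major version
brew services start postgresql
```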
I was working on a personal project and feeling guilty about not writing enough tests. But work projects and personal projects are different enough that no one should feel guilty for skimping on tests in a personal project.
Once a service is running in production, it never gets turned off. Decommissioning a service always ends with realizing some unknown party was using it and still needs it for their job. The standards for testing should be higher.
You’re not the only person working on your project. There’s no better way to communicate to your teammates how you expect code to behave than good tests. Having a unit test for a ticket makes it harder to reintroduce the bug.
If you’re starting a new service from scratch, testing should be a part of your project from the beginning. Pick frameworks that are easy to test. At work, we don’t use any Go router frameworks; they’re a huge pain to set up before each test.
In a personal project, you’re the only person working on it. If you stop, it’s more likely to be abandoned than passed to another developer. It’s hard to justify tests if the project might not last a year.
Unless you’re a pro at project management and have strict requirements, your side projects are going to have higher code churn than at work. How can you justify writing acceptance tests for a web page whose layout is constantly being tweaked?
Test as much as you can, but don’t beat yourself up over it.
My dream for the internet of things is a bunch of different devices coordinating with each other. My air conditioner, humidifier, and dehumidifier should all work together to keep my apartment climate controlled and prevent me from ever having dry, cracked hands ever again. Having connected devices work together feels so far away. As we baby step towards my dream, here are some rules all internet of things devices should follow.
Services need to last as long as a dumb version of the device would
Washing machines and refrigerators can last ten to twenty years. Your 1985 Nintendo probably still works. Electrical components don’t degrade like mechanical components do, so internet of things devices need to last at least as long as their mechanical counterparts.
Devices should still work without internet
All devices need manual controls. If the internet is out, or if a storm is making the connection unreliable, the smart device should still be usable. Your Juicero doesn’t need to connect to the internet to squeeze a packet of juice.
Devices need to be repairable or modular
If the wifi on my refrigerator breaks, I can’t take it into an Apple store without renting a truck. Even worse, I probably bought an LG fridge, and LG doesn’t have stores at the local mall. The smart parts of appliances need to be replaceable by the owner or be a separate module from the appliance. Also, if I detach a smart module from my refrigerator, the refrigerator should still work.
Devices should be usable while updates are installing
Servers store two versions of their firmware: the latest update and the previous one. When you install a firmware update, the server overwrites the older version and boots from that location. Smart devices should do the same, so an update never takes the device out of service.
Services need to be open source
To ensure devices last, the devices, their APIs, and the hubs that control them need to be open source. Ideally, all companies would open source their device software when that device reaches end of life. That’s never happened, so we need internet of things devices to be open source from the start. If you can’t recreate a device’s server features in AWS, it’s worthless.
Devices need to work with multiple brands of hubs
We can’t have an app for each device. Devices need an easy-to-use API and support for the most common hubs.
They need to be secure (No default passwords)
Configuring a smart device should require pairing with the user’s computer or phone. If the device requires connecting to a service, make the user create an account or tie the account to the phone app. Unconfigured devices shouldn’t be allowed access outside the user’s home network.
Stop with the worthless metrics
Internet of things devices need to improve your life instead of measuring it. I don’t need to know how much water I drink every day. I don’t need a smart pillow or a smart bed. Not now. Not ever.
Not everything needs to be connected. Some things can be dumb.
Design for our wired future
It’s maddening to have expensive light bulbs with wireless chips instead of expensive lamps and ceiling fans with cheap LED bulbs. I get that it’s easier to convince people to try out smart light bulbs when they don’t need to rewire their home, but the market has been proven: people love programmable light bulbs that can change color. Light bulbs are going to burn out. They shouldn’t be expensive, and they shouldn’t be wireless.
We don’t need more wireless things. All of our wall outlets are going to be USB-C someday. Someone needs to start building the products that take advantage of our fast, low powered, wired future.
The power is out. I want to write a blog post, but the only thing worth talking about this week is Susan J. Fowler’s story about her awful year at Uber.
My friends will share her story commenting that “if this is true, it’s damning.”
I’m convinced it’s true.
Even at our favorite companies, senior managers will harass female employees, and those companies’ employees eventually find out that HR exists to protect the company from liability, not to help them.
Hiring people with a strong sense of entitlement very rarely works out, no matter how good they are on other axes— Sam Altman (@sama) November 30, 2016
The absolute opposites of an entitled employee are the apprentices in Jiro Dreams of Sushi. The apprentices have to work for ten years making rice and doing other menial tasks before they’re allowed to make a dish. Jiro’s desire for perfection really shows his dedication to his craft.
The film paints a portrait of an exacting patriarch who demands perfection from himself, his sons, and the hard-working apprentices who work up to 10 years before being allowed to cook eggs.— Influence Film Forum
Although… why would anyone put up with that? If you were even a mediocre chef, why would you toil under Jiro instead of creating your own restaurant or being a more senior chef at another one? Only two types of employees will apprentice with Jiro: sushi fanatics and people who can’t find a job anywhere else.
Sam Altman wants employees who have nowhere else to go.
Scala finally clicked. It took a year of writing production code with it and completing Coursera’s Functional Programming Principles in Scala course (lucky me, it’s taught by the creator of Scala, Martin Odersky). When I got to the programming exercises in the Scala chapter of Seven Languages in Seven Weeks, I wanted my code to follow every Scala principle. I wanted to only use immutable variables and use functional programming components instead of for loops.
Here’s my tic tac toe code in its entirety. You can run it using Scala’s REPL. I didn’t want to spend an entire weekend optimizing it, so this is the first complete version. I’ll tell you why it sucks later.
I’m pretty proud of it. It highlights the best parts of Scala which are the case classes, match statements, and decomposing an object in a match statement.
Case classes are like fancy, extendable enums. They can inherit from other classes, override default methods like toString, and have new methods. We can create a fake-enum type by having our case classes inherit from a Player base class.
Case classes can have constructor arguments. This lets us associate a value with the enum.
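A sketch of that fake enum (the names here are illustrative, not the exact ones from my game):

```scala
// Each case inherits from Player and carries its symbol as a constructor value
sealed abstract class Player(val symbol: String) {
  override def toString: String = symbol // override a default method
}
case object X extends Player("X")
case object O extends Player("O")
case object Empty extends Player("-")

// X.toString == "X", Empty.symbol == "-"
```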
Match statements are like fancy switch statements. They evaluate in order and can include additional logic. In Scala, the underscore is the “we don’t care” character. In a match statement it matches anything, so it should always be your last case.
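A self-contained example showing in-order evaluation, extra logic via a guard, and the catch-all underscore:

```scala
def wordFor(n: Int): String = n match {
  case 1          => "one"
  case x if x < 0 => "negative" // guards add extra logic to a case
  case _          => "many"     // underscore matches anything; keep it last
}

// wordFor(1) == "one"; wordFor(-3) == "negative"; wordFor(5) == "many"
```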
Decomposing an Object
Match statements can also decompose an object into variables for you if it matches a specific format.
In our move method, we use a match statement to split a List into its head and its tail. Outside of a match statement, the expression head :: tail prepends head to the front of the List tail using the :: operator. Match checks whether your variable can be deconstructed to match this pattern.
We check that the head, the Tile at the specified location, is empty. Then we insert a new tile. Any operator ending with a colon is right-associative, so head :: tail inserts head at the front of the List tail.
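The decomposition in isolation, using plain strings in place of Tiles:

```scala
val board = List("X", "O", "-")
val described = board match {
  case head :: tail => s"first tile $head, remaining $tail" // binds head and tail
  case Nil          => "no tiles"
}
// described == "first tile X, remaining List(O, -)"
```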
Below is the same code refactored to show which variable is calling the method.
Maybe our board should’ve been a Vector of Tiles instead of a List. Lists in Scala are linked lists. Linked lists are great for immutable collections, but they’re hard to reason about when they’re new to you. A Vector would’ve let us access tiles by their index and simplified the code.
For example, we need to iterate through the List to the tile’s location to check whether it is empty. This is only optimal if the user’s input is valid; if the input is invalid, we have to iterate through the list multiple times. If we used a Vector, we could create a separate method to check if a move is valid instead of trying to save operations by shoving the check into our move method.
Using a Vector to check if a move is valid would fix another problem. Our game loop is recursive. Because we catch invalid input for the moves in a try catch block, it’s not tail recursive. When we call playGame, the code in the catch block could still execute. Tic tac toe has at most nine moves. If our game was more complex, we could actually cause a stack overflow.
Overall, I think using immutable values made things more complex. I spent most of my time writing the gameState method. I wanted to use Lists and foldLeft to build the results. If I allowed myself to use mutable values, I would have just used a for loop to check each of the rows, columns, and diagonals for a winner.
- Read-Eval-Print Loop. The best parts of Ruby and Scala are testing code snippets in their REPLs. Every programming language needs one. I don't know how you can code without it.
- Let's say you're decomposing a tuple. If you only care about one of the values, you can use the underscore to ignore the other.
- Outside of my tic tac toe code, which is entirely my bad code, Scala has issues. Its immutability and static typing are a huge pain when you're trying to parse JSON.
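The tuple-underscore trick from the footnote above, sketched:

```scala
val pair = ("answer", 42)
val (label, _) = pair // keep the first value, ignore the second
// label == "answer"
```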