  • Another Rule for the Internet of Things

    Today I learned that my broken dishwasher could have shown me the error codes. I would pay extra for a dishwasher that could connect to my phone just to see those codes. I don’t want to wait a week for the repair people. I want to order the part and fix it as soon as Amazon Prime delivers.

    My first gen Automatic doesn’t reliably track my location anymore, but it’s worth keeping around for its error codes. There is no confidence like walking into a repair shop already knowing what’s wrong with your car.

    People love when appliances are repairable. If smart appliances told you what was wrong and how to fix it, they’d be loved too.

  • Fun With Docker

    I really hate installing Java on my personal machines, but I still want to write Scala. Part of the Docker promise is that you develop in the same environment as your production environment. Let’s actually set that up.

    Add this alias to your $HOME/.bashrc file. Reload your bashrc, and the sbt command will run sbt inside the Docker image with your current directory mounted into the container.

    alias sbt='docker run --rm --tty --interactive --volume "$PWD":/app bigtruedata/sbt'
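
    With the alias loaded, usage looks like a local sbt install (assuming the image treats /app, where the alias mounts your current directory, as its working directory):

    cd my-scala-project   # any directory containing an sbt build
    sbt compile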

    What if you want to serve HTML from a random directory? Create another alias for the Apache Docker image! This works great for any code-coverage library that outputs HTML.

    alias httpd='docker run --rm --tty --interactive -p 8000:80 --volume "$PWD":/usr/local/apache2/htdocs/ httpd'
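
    For example, to browse a coverage report at http://localhost:8000 (the scoverage output path here is just an illustration; any directory of HTML works):

    cd target/scala-2.12/scoverage-report
    httpd
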
  • Stop Taking Advice From Bad People

    There’s too much information to consume. We need to be tactical about the advice we take. Why give any of your time to someone horrible?

    “But, Jim”, you say. “If the person is giving good advice it doesn’t matter whether or not they’re a bad human being.”

    Do you really want to be that person’s editor, searching for the good content in the bad? Whatever good advice you find is aligned with their philosophy. If you don’t agree with their philosophy, you’re going to need to hear that same advice from multiple people before you can trust it.

    For instance, why is James still in the news? Who is taking his advice?

    I don’t remember where I first heard of him, but I realized his advice wasn’t for me after reading a profile on him that I can’t find. (I’ll never find the original article. His SEO game is on point.) There are two stories I distinctly remember. When James’s first child was born, he would wake up early and go hide in a coffee shop. I can find some proof of this story in this Mixergy transcript. James is very proud of the second story. He secured a funding letter of intent for an idea without the implementation or the ability to ever implement it. He then negotiated a deal to buy a company in that idea-space and used that company as an asset to raise money.

    All of his takeaways are wrong. In both stories the true moral is lost on him.

    Don’t waste time on horrible people. Find people who consistently give good advice. Find someone who shares your values.

    Update: Another Example after Only Eight Days

    I agree that play-time limits are dumb, but Penelope Trunk isn’t the best person to take advice from…

  • Testing Your Kinesis Stream With Kinesalite

    Resources

    1. Kinesalite
    2. Getting started with Kinesalite by Gavin Stewart - Gavin’s guide gave me the first clues that I needed to make this work
    3. Akka Stream Source.actorRef
    4. Akka Alpakka AWS Kinesis Connector
    5. awscli kinesis commands

    At work I need to move an application to Kubernetes, but some of its logs are needed for user-usage data and ad-impression tracking. Setting up rolling logging from our container to AWS S3 looked more complicated and risky than our current setup, so we didn’t even investigate it. Other applications at the company use AWS Kinesis, so it made sense to do the same. I wrote a code snippet to push log messages to Kinesis via an Akka Stream. I could get everything working in the REPL except the part that actually pushes to Kinesis.

    I tried to use kinesalite, an application that implements the AWS Kinesis API, but there aren’t any guides for getting it up and running. I assumed you could just start kinesalite, point your Kinesis endpoint at localhost with the right port, and it would just work. I did that, and nothing happened. No error messages, no kinesalite log messages, nothing.

    It took way too long (two days) to figure out how to write to kinesalite. Here is everything that finally worked.

    Writing to Kinesis

    Below is my example code to write to Kinesis. It creates an Akka Stream to which you can send messages. I tested each part of the Akka Stream in the Scala REPL.

    import scala.concurrent.duration._
    
    import akka.actor.ActorSystem
    import akka.stream.{ActorMaterializer, Materializer}
    import akka.stream.OverflowStrategy.fail
    import akka.stream.alpakka.kinesis.KinesisFlowSettings
    import akka.stream.alpakka.kinesis.scaladsl._
    import akka.stream.scaladsl.{Flow, Sink, Source}
    import akka.util.ByteString
    import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration
    import com.amazonaws.services.kinesis.AmazonKinesisAsync
    import com.amazonaws.services.kinesis.AmazonKinesisAsyncClientBuilder
    import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry
    
    implicit val system: ActorSystem = ActorSystem("TestActor")
    implicit val materializer: Materializer = ActorMaterializer()
    
    // Create a Kinesis endpoint pointed at our local kinesalite
    val endpoint = new EndpointConfiguration("http://localhost:4567", "us-east-1")
    
    implicit val amazonKinesisAsync: AmazonKinesisAsync = AmazonKinesisAsyncClientBuilder.standard().withEndpointConfiguration(endpoint).build()
    
    // From the Akka Alpakka example
    val flowSettings = KinesisFlowSettings(parallelism = 1,
        maxBatchSize = 500,
        maxRecordsPerSecond = 1000,
        maxBytesPerSecond = 1000000,
        maxRetries = 5,
        backoffStrategy = KinesisFlowSettings.Exponential,
        retryInitialTimeout = 100.millis
      )
    
    val streamName = "myStreamName"
    val partitionKey = "logs"
    
    val loggingActor = Source.actorRef[String](Int.MaxValue, fail)
        .map(log => ByteString(log).toByteBuffer)
        .map(data => new PutRecordsRequestEntry().withData(data).withPartitionKey(partitionKey))
        .to(KinesisSink(streamName, flowSettings))
        .run()
    
    loggingActor ! "testing this thing"
    loggingActor ! "test test"

    Setting Fake AWS Credentials

    Either the AWS Java SDK requires credentials (my bet) or kinesalite requires credentials, even though neither cares what those credentials actually are. Create a $HOME/.aws/credentials file with the contents below.

    [default]
    aws_access_key_id = x
    aws_secret_access_key = x
    region = us-east-1

    This was the last step I completed to get kinesalite working. Neither the AWS Java SDK nor kinesalite showed a single error message when I tried to connect to Kinesis without authentication credentials.

    Install the AWS CLI Tool

    You need this to set up your kinesalite stream.

    # pip3 if you're using Python 3, or just pip if it's properly aliased
    pip2 install awscli
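
    The rest of these steps also assume kinesalite itself is installed and listening on port 4567, its default. kinesalite is an npm package:

    npm install -g kinesalite
    kinesalite --port 4567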

    Creating Your Stream

    I didn’t know you had to do this. Ops created our staging and production streams. I expected Kinesalite to accept requests for any stream, but I guess it behaves exactly like AWS Kinesis.

    Run the AWS CLI tool with the following parameters. It is super sensitive to typos: I copied and pasted from examples with no noticeable spelling errors and it still failed. The only feedback it gives is a message like “--stream-name is required”.

    AWS_ACCESS_KEY_ID=x AWS_SECRET_ACCESS_KEY=x aws --endpoint-url http://localhost:4567/ kinesis create-stream --stream-name=myStreamName --shard-count=1 --no-verify-ssl
    AWS_ACCESS_KEY_ID=x AWS_SECRET_ACCESS_KEY=x aws --endpoint-url http://localhost:4567/ kinesis list-streams

    The first command creates your stream. The second command lists all existing streams.

    Send Messages to Kinesis

    In your REPL, send something to Kinesis.

    loggingActor ! "testing this thing"
    loggingActor ! "test test"

    Verifying the Output

    There are two parts to reading what has been pushed to Kinesis. First you need a shard iterator. describe-stream lists the stream’s shards and their IDs:

    AWS_ACCESS_KEY_ID=x AWS_SECRET_ACCESS_KEY=x aws --endpoint-url http://localhost:4567/ kinesis list-streams
    AWS_ACCESS_KEY_ID=x AWS_SECRET_ACCESS_KEY=x aws --endpoint-url http://localhost:4567/ kinesis describe-stream --stream-name myStreamName
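
    describe-stream returns shard IDs, not iterators, so there’s one more call in between. shardId-000000000000 is the usual ID for a fresh single-shard stream, but use whatever your describe-stream output shows:

    AWS_ACCESS_KEY_ID=x AWS_SECRET_ACCESS_KEY=x aws --endpoint-url http://localhost:4567/ kinesis get-shard-iterator --stream-name myStreamName --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON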

    Once you have the shard iterator, you can read all of the records from that point on. Replace the --shard-iterator value with the one in your output.

    AWS_ACCESS_KEY_ID=x AWS_SECRET_ACCESS_KEY=x aws --endpoint-url http://localhost:4567/ kinesis get-records --shard-iterator AAAAA+somekeyvalues

    Your record is a base64-encoded string. The following Scala snippet will decode it back to what you pushed to Kinesis.

    import java.nio.charset.StandardCharsets
    import java.util.Base64
    
    // Decode the record body back into a UTF-8 string.
    // (Mapping each byte to a Char would mangle multi-byte characters.)
    def decode(str: String): String =
      new String(Base64.getDecoder.decode(str), StandardCharsets.UTF_8)
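
    For example, the record body from the second test message decodes like so (“dGVzdCB0ZXN0” is just “test test” base64-encoded):

    decode("dGVzdCB0ZXN0") // => "test test"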

    Sidenote: Keep Your Libraries up to Date

    Sending a Kinesis request for every log message is inefficient. There’s an Akka Streams stage called groupedWithin that lets you batch elements by either a count or a timeout; if you don’t reach the count limit within the timeout, groupedWithin flushes the batch anyway. Even better, there is groupedWeightedWithin, which lets you specify a weight function. Kinesis has a 1MB limit on its payloads, so we can batch our requests until we get close to 1000000 bytes, as in the sketch below.
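
    A minimal sketch of that batching stage, assuming Akka Streams 2.5.1+ and weighing each message by its byte length (the five-second window is an arbitrary choice):

    import scala.concurrent.duration._
    
    import akka.stream.scaladsl.Flow
    import akka.util.ByteString
    
    // Emit a batch when it approaches the 1MB Kinesis payload limit,
    // or when five seconds pass, whichever comes first.
    val batchLogs = Flow[ByteString]
      .groupedWeightedWithin(1000000L, 5.seconds)(_.length.toLong)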

    We can’t actually use groupedWeightedWithin, though. Our application still runs on the Spray web framework, and the latest Akka version Spray supports is 2.4; groupedWeightedWithin wasn’t introduced until Akka 2.5.1. We’ll have to wait until we upgrade the application to Akka HTTP before we can use it.

    If we kept our libraries up to date, we would have access to groupedWeightedWithin.

    Asides

    1. Our current log roller was written by someone who doesn't work here anymore, and it's not dependable at all. It's based on logback, which is designed to fail itself before it ever fails your application. One time we had a bug that could result in two instances of our application running at the same time, with only one of them bound to the port that received connections. Our log-rolling library would lock the log file during the rollover. Inevitably, the instance not receiving requests would roll the logs and lock the other instance out of ever writing to the log file.
  • Secret Santa Matcher

    Written while watching TV, for the secret Santa at Phoebe’s work. Recorded for next year.

    function secretSantaMatchMaker(people) {
      let unmatched = people.slice(),
        pairs = {};
    
      for(let i = 0; i < people.length; i++) {
        let giver = people[i],
          idx,
          receiver;
    
        // Don't let a person end up with their own name
        // Do you see the bug in this code? It won't happen often
        do {
          idx = Math.floor(Math.random() * unmatched.length);
          receiver = unmatched[idx];
        } while(giver === receiver);
    
        unmatched.splice(idx, 1);
        pairs[giver] = receiver;
      }
    
      return pairs;
    }
    
    let people = [
      'Meghan',
      'Norbert',
      'Nyasia',
      'Elizabeth',
      'Mariam',
      'Lindsay',
      'Jeff',
      'Phoebe',
    ];
    
    secretSantaMatchMaker(people);
  • Upgrading PostgreSQL Major Versions

    Every time I upgrade Postgres major versions, I need to google for these steps. This usually happens after I’ve run brew upgrade and my database stops working. Here are the steps for future reference.

    # Step 1
    # Rename your postgres database directory
    mv /usr/local/var/postgres /usr/local/var/postgres9.6
    
    # Step 2
    # Using the latest version of postgresql, initialize a brand new database.
    initdb /usr/local/var/postgres -E utf8
    
    # Step 3
    # -d Location of the database being copied from
    # -D Location of the database being copied to
    # -b Directory containing the old version's binaries (reads the 'from' database)
    # -B Directory containing the new version's binaries (writes the 'to' database)
    pg_upgrade \
      -d /usr/local/var/postgres9.6 \
      -D /usr/local/var/postgres \
      -b /usr/local/Cellar/postgresql/9.6.5/bin/ \
      -B /usr/local/Cellar/postgresql/10.0/bin/ \
      -v
    
    # Revert if anything went wrong
    # mv /usr/local/var/postgres9.6 /usr/local/var/postgres
    # brew uninstall postgresql
    # brew install postgresql@9.6
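
    After pg_upgrade succeeds, start the new server and regenerate the planner statistics, which pg_upgrade doesn’t carry over. (The service name assumes the plain Homebrew postgresql formula.)

    brew services start postgresql
    vacuumdb --all --analyze-in-stages
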
  • Testing at Work and in the Home

    I was working on a personal project and feeling guilty about not writing enough tests. Testing at work and testing at home are different enough that no one should feel guilty about skimping on tests in a personal project.

    At Work

    Once a service is running in production, it never gets turned off. Decommissioning a service always ends with realizing some unknown party was using it and still needs it for their job. The standards for testing should be higher.

    You’re not the only person working on the project, and there’s no better way to communicate to your teammates how you expect code to behave than good tests. Adding a unit test for every bug ticket makes it harder to reintroduce the bug.

    If you’re starting a new service from scratch, testing should be part of the project from the beginning. Pick frameworks that are easy to test. At work, we don’t use any Go router frameworks; they’re a huge pain to set up before each test.

    At Home

    In your personal projects, you’re the only person working on them. If you stop, a project is more likely to be abandoned than passed to another developer. It’s hard to justify tests for a project that might not last a year.

    Unless you’re a pro at project management and have strict requirements, your side projects are going to have higher code churn than at work. How can you justify writing acceptance tests for a web page whose layout is constantly being tweaked?

    Test as much as you can, but don’t beat yourself up over it.

  • Rules for the Internet of Things

    My dream for the internet of things is a bunch of different devices coordinating with each other. My air conditioner, humidifier, and dehumidifier should all work together to keep my apartment climate controlled and prevent me from ever having dry, cracked hands again. Having connected devices work together feels so far away. As we baby-step towards my dream, here are some rules all internet of things devices should follow.

    Any service needs to last as long as a dumb version of the device would

    Washing machines and refrigerators can last ten to twenty years. Your 1985 Nintendo probably still works. Electrical components don’t degrade the way mechanical components do, so internet of things devices need to last at least as long as their mechanical counterparts.

    Devices should still work without internet

    All devices need manual controls. If the internet is out, or a storm is making the connection unreliable, the smart device should still be usable. Your Juicero doesn’t need to connect to the internet to squeeze a packet of juice.

    Devices need to be repairable or modular

    If the wifi on my refrigerator breaks, I can’t take it into an Apple store without renting a truck. Even worse, I probably bought an LG fridge, and LG doesn’t have stores at the local mall. The smart parts of appliances need to be replaceable by the owner or be a separate module from the appliance. Also, if I detach a smart module from my refrigerator, the refrigerator should still work.

    Devices should be usable while updates are installing

    Servers store two versions of their firmware: the latest update and the previous one. When you install a firmware update, the server overwrites the older version and boots from that location. Smart devices should work the same way, staying usable while new firmware installs in the background.

    Services need to be open source

    To ensure a device lasts, the device, its API, and the hubs that control it need to be open source. Ideally, every company would open source its device software when the device reaches end of life. That’s never happened, so we need internet of things devices to be open source from the start. If you can’t recreate a device’s server features in AWS, it’s worthless.

    Devices need to work with multiple brands of hubs

    We can’t have an app for each device. Devices need an easy-to-use API and support for the most common hubs.

    They need to be secure (No default passwords)

    Configuring a smart device should require pairing with the user’s computer or phone. If the device requires connecting to a service, make the user create an account or tie the account to the phone app. Unconfigured devices shouldn’t be allowed access outside the user’s home network.

    Stop with the worthless metrics

    Internet of things devices need to improve your life instead of measuring it. I don’t need to know how much water I drink every day. I don’t need a smart pillow or a smart bed. Not now. Not ever.

    Not everything needs to be connected. Some things can be dumb.

    Design for our wired future

    It’s maddening to have expensive light bulbs with wireless chips instead of expensive lamps and ceiling fans with cheap LED bulbs. I get that it’s easier to convince people to try smart light bulbs when they don’t need to rewire their homes, but the market has been proven: people love programmable light bulbs that can change color. Light bulbs are going to burn out. They shouldn’t be expensive, and they shouldn’t be wireless.

    We don’t need more wireless things. All of our wall outlets are going to be USB-C someday. Someone needs to start building the products that take advantage of our fast, low-powered, wired future.

  • The Dream of Tech Company Culture Is a Lie

    The power is out. I want to write a blog post, but the only thing worth talking about this week is Susan J. Fowler’s story about her awful year at Uber.

    My friends will share her story commenting that “if this is true, it’s damning.”

    *sigh*

    I’m convinced it’s true.

    Uber has a history of bad behavior and misogyny, and then Uber’s head of HR left to join Twitter, which doesn’t make a lot of sense.

    Even at our favorite companies, senior managers will harass female employees, and those companies’ employees eventually find out that HR exists to protect the company from liability, not to help them.

  • Jiro Dreams of Sushi

    The absolute opposites of entitled employees are the apprentices in Jiro Dreams of Sushi. The apprentices have to work for ten years making rice and doing other menial tasks before they’re allowed to make a dish. Jiro’s desire for perfection shows his dedication to his craft.

    The film paints a portrait of an exacting patriarch who demands perfection from himself, his sons, and the hard-working apprentices who work up to 10 years before being allowed to cook eggs.

    Influence Film Forum

    Although… why would anyone put up with that? If you were even a mediocre chef, why would you toil under Jiro instead of opening your own restaurant or taking a more senior position somewhere else? Only two types of employees will apprentice with Jiro: the sushi fanatics and the people who can’t find a job anywhere else.

    Sam Altman wants employees who have nowhere else to go.