Being Reminded Rather Than Informed

by Teja Yenamandra on Jan 6, 2017.
_________________

We need to be reminded more often than informed.

I stumbled across this aphorism in Marcus Aurelius’ Meditations several months ago.

This past year, I read over a book per week. Impressive, I guess, but valuable? I’m not really sure.

I’ve realized that I’ve been treating the accumulation of knowledge almost the way a shopaholic treats shopping. It’s consumption, just in a different form. Knowledge is power, but applied knowledge is far more powerful. And probably more useful, too.

In December, I constructed notes from 4 books that I really want to apply in my day-to-day life and in the management of our business over this year. I have been reviewing my notes from these books for a few minutes every single morning, and I plan to re-read the books every quarter. I want the knowledge from these particular books to become a core part of my personal operating system this year.

It struck me that we should bring a similar practice into our business. Many moons ago, my leadership team and I devised several core values for our business. They are great core values. They encapsulate the very best of our business. However, we review them once a quarter. To actually live these core values, we ought to raise them weekly, if not daily. In fact, I’m not entirely sure anybody but us is aware of them.

Normal blog etiquette demands that I end with a prescription, but I have none. Maybe I have a challenge instead: if you’re a voracious reader, fight against your natural tendencies. Pick a few books that you really love and spend the next year understanding them inside and out.



One Time I Flew to Washington, DC For a Day

by Teja Yenamandra on Dec 31, 2016.
_________________

As many of you know, we run a fully remote business. That means we have folks in nearly every major US city, and in some major international ones as well.

In practice, this means we do most business virtually — including even extending job offers for core roles. However, this past December, we did something a bit differently.

There was a particular someone who was quite simply kicking ass as a contractor, and we wanted to bring her on in a full-time capacity. Unfortunately for us, she lived a few hours outside of any easy-to-access airport. Normally, I’d give her a call. Instead, I flew to her city, grabbed a rental, drove two hours, ate lunch with her, extended an offer, and flew back the next day. All for a 45-minute meeting.

It was so worth it.

This overture no doubt crushed my overall work output for 2 days. However, I traded my personal output to improve the organization’s overall output for the foreseeable future. Even if you run a remote business, some things are important to do face-to-face. And even if you have a shit ton of things to do, sometimes the right move is not doing any of it and instead making sure you’re taking care of people.



How Can I Reduce my Stripe Processing Fees? Pt. 1

by Tyler Newkirk on Oct 10, 2016.
_________________

If your business is like Gun.io, you spend a lot of time charging clients for services. Waiting for checks in the mail is so 2005, so you probably accept credit cards, and maybe even ACH. There are tons of blog posts comparing different payment services, like Paypal vs. Stripe, etc. We’ll leave that topic alone and say we chose Stripe. Assuming you did, too, our VP of Operations, Tyler Newkirk, picked up the trail to determine...

How can I reduce my Stripe processing fees?

I was recently re-introduced to the idea of leveraging Stripe’s volume discount for our business by a savvy client of ours (check out AdvicePay, a FinTech company founded by Alan Moore and Michael Kitces). This wasn’t the first time I’d heard about Stripe’s willingness to work with larger clients, but I had yet to take the time to dive in and explore the potential benefits for which we might be eligible. Due to recent growth and a commitment to a lean operating structure, I deemed it time to give Stripe a ring.

Only I couldn't, because Stripe doesn’t list a public phone number for their sales or support teams. Bummer - be prepared for a semi-lengthy email discourse.

Before contacting Stripe’s sales team, however, I spent a few hours digging online to see if I could answer any of these questions myself. Unfortunately, I was unable to discover much more than a few vague pointers here and there. As a result, I wanted to record my findings and provide a resource for anyone else looking to understand Stripe’s volume discount.

I wanted Stripe to answer the following questions:

1. What criteria does Stripe use to determine volume discount eligibility?

2. Within that criteria, what level of volume is required for Stripe's discount?

3. How is Stripe's volume discount applied in practice?

In this three-part series I will explore these questions and the answers I was able to uncover.


Part 1

What criteria does Stripe use to determine volume discount eligibility?

Searching for the appropriate contact information, I found a sales contact form at https://stripe.com/contact/sales.  Encouraged by the fact that this form seemed specifically geared toward volume discount requests, I entered the required details and sent the form on its way, hoping for the best.

Within 24 hours I had an initial response, courtesy of a helpful Sam from Sales. Right off the bat he disclosed that Stripe typically only considers volume discounts for accounts “consistently” doing over $80,000 USD in transactions per month - with the caveat that account eligibility may still vary due to different characteristics unique to each business. He clarified:

Our pricing is largely driven by our underlying costs for processing your transactions which, as you can imagine, varies quite a bit from user to user. These costs are highly contingent on the business itself and the customer profile. Having lots of American Express, international and corporate customers will cause the rate to skew higher; having mainly domestic debit cards will keep the rate low.

Of note: Stripe actually publishes a figure much lower than Sam’s stated $80K on their own pricing page, in a section that seems to appear sporadically based on some kind of split-testing:

[Screenshot: Stripe’s pricing page showing a €30,000 volume discount threshold]

I captured this image from their site on 9/22/16; the exchange rate at the time of writing this article puts €30,000 at roughly $33,500. At no point in the past 5 years has the exchange rate been anywhere close to converting €30K into $80K. Misleading, to say the least.

As for Sam’s reply, although still somewhat opaque, I considered his admission a valuable one. Reading the remainder of his email, though, I became increasingly confused as I realized Sam was implying that our monthly transaction volume fell short of the $80,000 threshold. Confident that our business “consistently” exceeded this threshold (by a lot), I went back into our Stripe account to investigate and gather concrete numbers before responding.

Looking at our Stripe dashboard, I double-checked our figures. I played with the dates, creating a list of monthly transaction volume totals for the entire YTD.

[Screenshot: Stripe dashboard showing monthly transaction volume totals for the year to date]

I wrote back to Sam from Sales, detailing our monthly figures for the past few quarters as well as some growth statistics to bolster the argument for our volume discount approval (and to learn how flexible Stripe might be with respect to other compelling statistics outside of the $80K/mo requirement). Feeling confident in my rebuttal, I anxiously awaited Sam’s response.

Only 3 hours later, Sam courteously replied with his own (albeit significantly lower) monthly transaction figures, stating that based on his findings, we weren’t yet reaching the $80,000 monthly transaction volume floor required to be considered for the volume discount. Frustrated, I read on - until I caught a phrase that Sam had used to describe his monthly totals (my emphasis added):

Please note that according to your recent net processing, it doesn't appear that Gun.io is exceeding the $80k / month minimum threshold for a potential pricing review of your account.

How does Stripe calculate “Net Processing”?

You saw it coming, and there it was - the crux of the issue. It now seemed obvious that we were referencing different equations to arrive at our monthly totals, yet I wasn’t sure where this ‘Net Processing’ total resided. A bit embarrassed by my ignorance and apparent unpreparedness for this discussion with Sam, I went back online to search for the source of his ‘Net Processing’ numbers before responding once more.

After minimal hunting, I learned that Stripe prepares a downloadable financial report for each user, housed at https://dashboard.stripe.com/account/data. The ‘Download Report…’ button will create and download a .csv file of your account’s history for your eternal viewing pleasure.

[Screenshot: the ‘Download Report…’ button on Stripe’s account data page]

Looking at the report in Excel, I began to play with the different categories of numbers to work backwards into the monthly ‘Net Processing’ totals that Sam provided in his last email (annoyingly, there is no ’Net Processing’ label anywhere in the report).  It took a bit of guess-and-check, but I eventually realized that his numbers were the result of taking strictly 'Gross Amount' of 'Sales' (which I believe to be purely sales via CC's) and adding the negative 'Gross Amount' from 'Refunds' (if applicable). Eureka! See some example figures below:

[Screenshot: example figures from the downloaded report in Excel, with the ‘Gross Amount’ rows for Sales and Refunds highlighted in red]

The undimmed area provides the general reference of where to view the relevant information; the red boxes delineate the exact rows to sum for each month in order to find the ‘Net Processing’ amount. Per the sample image, the Net Processing Total equation for August ('8/1/16’ in cell D1) goes like this: $170,000 (Gross Amount, Sales) plus -$8,000 (Gross Amount, Refunds) equals $162,000 ('Net Processing Total').

To explain further - if Stripe were to consider this report for the volume discount, it would determine that August realized $162,000 in transaction volume; $21,500 in July; $86,500 in June. Despite the two months above their stated $80K threshold, I’d suspect that the variability in this example would be too great to approve a volume discount. Armed with new comprehension, I now understood how Sam (and thus, Stripe) calculated the monthly transaction volume (i.e. ’Net Processing’) numbers for our business.

Additional side note: the transaction amount shown in your dashboard is the sum of both card and ACH transactions, gross of any refunds. When you’re sizing up your fee exposure and your eligibility, you need to strip out ACH and net out refunds, as we did above.
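To make the math concrete, here is a minimal Python sketch of the calculation as I understand it. The June and July gross/refund splits below are invented for illustration (only August’s breakdown comes from the example above), and the threshold is simply Sam’s stated $80K figure:

monthly_gross = {
    # month: (gross card sales, gross refunds) -- refunds are negative in the report
    '2016-06': (90000, -3500),
    '2016-07': (23000, -1500),
    '2016-08': (170000, -8000),  # the example month: $170,000 + (-$8,000)
}

THRESHOLD = 80000  # Sam's stated monthly minimum for a pricing review

for month in sorted(monthly_gross):
    gross_sales, gross_refunds = monthly_gross[month]
    # 'Net Processing' = gross card sales plus (negative) gross refunds; ACH is excluded entirely.
    net_processing = gross_sales + gross_refunds
    over = "yes" if net_processing >= THRESHOLD else "no"
    print("{}: Net Processing ${:,} (over $80K? {})".format(month, net_processing, over))

Run against those numbers, it prints $86,500 for June, $21,500 for July, and $162,000 for August - the same totals as above.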

tl;dr

Question 1: What criteria does Stripe use to determine volume discount eligibility?

Answer: Monthly ‘Net Processing’ totals, calculated as the sum of the ‘Gross Amount’ of ‘Sales’ and the ‘Gross Amount’ of ‘Refunds’.

In Part 2 I’ll share what I learned about the actual volume of sales necessary to get Stripe to bite on a volume discount for credit card processing fees.




Building Serverless Microservices with Zappa and Flask

by Rich Jones on Mar 29, 2016.
_________________

Today, I'm going to show you how to write and deploy serverless microservices using Flask and Zappa. If you're new to Flask, you'll see just how easy it is. However, if you prefer Pyramid, Bottle, or even Django, you're in luck, because Zappa works with any WSGI-compatible framework!

Zappa is super, super easy.

With serverless deployments, the web application only exists during the span of a single HTTP request. The benefit of this is that there's no configuration required, no server maintenance, no need for load balancers, and no cost of keeping a server online 24/7. Plus, it's incredibly easy and fun!

This demonstration will start with the most trivial example, and build up to a nearly-useful image thumbnailing service. In the next part, we'll look at setting up our service on a domain with a free SSL certificate using Let's Encrypt.

The Simplest Example

Before you begin, make sure you have a valid AWS account and that your AWS credentials file is properly installed.

Flask makes it super easy to write simple web services and APIs, and Zappa makes it trivially easy to deploy them in a serverless way to AWS Lambda and AWS API Gateway. The simplest example even fits in a single .gif!

First, you'll need to set up your "virtual environment" and install Flask and Zappa into it, like so:

$ virtualenv env
$ source env/bin/activate
$ pip install flask zappa

Now, we're ready to make our application. Open a file called my_app.py and write this into it:

from flask import Flask
app = Flask(__name__)

@app.route('/')
def index():
    return "Hello, world!", 200

# We only need this for local development.
if __name__ == '__main__':
    app.run()

The code is basically self-explanatory. We make a Flask object, use the 'route' decorator to define our paths, and call a 'run' function when we run it locally (which you can confirm by calling python my_app.py and visiting localhost:5000 in your browser).

Okay, so now let's deploy! Open a new file called zappa_settings.json, where we'll define our Zappa configuration.

{
    "dev": {
        "s3_bucket": "your_s3_bucket",
        "app_function": "my_app.app"
    }
}

This defines an environment called 'dev' (later, you may want to add 'staging' and 'production' environments as well), defines the name of the S3 bucket we'll be deploying to, and points Zappa to a WSGI-compatible function, in this case, our Flask app object.
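If you do add 'staging' and 'production' later, the settings file just grows by one entry per environment. A sketch (the bucket names here are placeholders you'd replace with your own):

{
    "dev": {
        "s3_bucket": "your_s3_bucket",
        "app_function": "my_app.app"
    },
    "staging": {
        "s3_bucket": "your_staging_s3_bucket",
        "app_function": "my_app.app"
    },
    "production": {
        "s3_bucket": "your_production_s3_bucket",
        "app_function": "my_app.app"
    }
}

Each environment then deploys and updates independently, e.g. zappa deploy production.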

Now, we're ready to deploy. It's as simple as:

$ zappa deploy dev

And our serverless microservice is alive! How cool is that?!

Adding File Uploads

Okay, now let's make our application a bit more interesting by turning it into a thumbnailing service. We'll take an uploaded image, cut a thumbnail, and host the thumbnail on S3.

First, we'll need to add a few more packages from pip:

$ pip install boto3 Pillow

You used to have to manually compile PIL/Pillow if you wanted to use it on AWS Lambda, but since Zappa automatically uses Lambda-compatible packages via lambda-packages, we don't have to worry about it.

Then, we'll update our code to add new imports:

import base64
import boto3
import calendar
import io

from datetime import datetime, timedelta
from flask import Flask, request, render_template
from PIL import Image

s3 = boto3.resource('s3')
BUCKET_NAME = 'your_public_s3_bucket'

and a new route:

@app.route('/upload', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        new_file_b64 = request.form['b64file']
        if new_file_b64:

            # Decode the image
            new_file = base64.b64decode(new_file_b64)

            # Crop the Image
            img = Image.open(io.BytesIO(new_file))
            img.thumbnail((200, 200))

            # Tag this filename with an expiry time
            future = datetime.utcnow() + timedelta(days=10)
            timestamp = str(calendar.timegm(future.timetuple()))
            filename = "thumb.%s.jpg" % timestamp

            # Send the Bytes to S3
            img_bytes = io.BytesIO()
            img.save(img_bytes, format='JPEG')
            s3_object = s3.Object(BUCKET_NAME, filename)
            resp = s3_object.put(
                Body=img_bytes.getvalue(),
                ContentType='image/jpeg'
                )

            if resp['ResponseMetadata']['HTTPStatusCode'] == 200:

                # Make the result public
                object_acl = s3_object.Acl()
                response = object_acl.put(
                    ACL='public-read')

                # And return the URL
                object_url = "https://{0}.s3.amazonaws.com/{1}".format(
                    BUCKET_NAME,
                    filename)
                return object_url, 200
            else:
                return "Something went wrong :(", 400

    return render_template('upload.html')

You'll notice that we're also using "render_template" now, so download this template as a file called 'upload.html' in a 'templates' directory within your project.

The other thing to notice here is that we're base64 encoding our binary data on the client, then decoding it server-side. AWS API Gateway can't yet handle binary data through the Gateway, so we have to encode it for now. Quite frankly, you probably don't want your data to go through API Gateway anyway, and you should just upload directly to S3 and then pass the key name to your service, but both ways work and it's easy enough to get our data for this example.
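If you'd rather exercise the endpoint from a script than through the HTML form, something along these lines should work. This is just a sketch: the URL is whatever API Gateway path your deploy printed, cat.jpg is any local image, and it assumes the requests library is installed:

import base64
import requests

# Replace with the API Gateway URL printed by `zappa deploy dev`.
UPLOAD_URL = "https://your_apigw_path/dev/upload"

# Base64-encode a local image, mirroring what upload.html does in the browser.
with open("cat.jpg", "rb") as f:
    b64_image = base64.b64encode(f.read()).decode("ascii")

resp = requests.post(UPLOAD_URL, data={"b64file": b64_image})
print(resp.status_code, resp.text)  # on success, the body is the thumbnail's public S3 URL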

Finally, you'll also notice that we're appending an 'expiry' time into the filename of our thumbnail. Because our service is completely serverless, we don't use a database. So, we'll have to use other ways of storing information. If this service was part of a larger microservice deployment, we probably wouldn't worry about this here, but since we're still standing alone, we have to use what resources we have available to us for storing little bits of data that we might want. So, filenames and 'Tags' on S3 objects are super useful to us!
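To illustrate why that embedded timestamp is handy, here's a sketch (not part of the service above) of a cleanup job - say, a scheduled Lambda or a cron script - that walks the bucket and deletes any thumbnail past its expiry:

import calendar
from datetime import datetime

import boto3

s3 = boto3.resource('s3')
BUCKET_NAME = 'your_public_s3_bucket'

def delete_expired_thumbnails():
    # Delete every thumb.<timestamp>.jpg object whose embedded expiry has passed.
    now = calendar.timegm(datetime.utcnow().timetuple())
    for obj in s3.Bucket(BUCKET_NAME).objects.all():
        parts = obj.key.split('.')
        # Only touch keys shaped like thumb.<timestamp>.jpg
        if len(parts) == 3 and parts[0] == 'thumb' and parts[1].isdigit():
            if int(parts[1]) < now:
                obj.delete()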

Now, you just have to update your code:

$ zappa update dev

And you've got a serverless thumbnailing service! Hooray! Browse to https://{{your_apigw_path}}/dev/upload to try it out!

Wrapping Up

So, now we've seen how trivially easy it is to build serverless microservices with Flask and Zappa. This combo is great for image processing services, text-processing, number-crunching, or even hosting fairly complex web applications if that's your goal.

In the next guide, we'll learn how to deploy our microservice to a subdomain with a free SSL certificate using Let's Encrypt. Stay tuned!



Announcing Zappa - Serverless Python Web Applications

by Rich Jones on Feb 8, 2016.
_________________

Today, I'm pleased to announce the first major release of Zappa - a system for running "serverless" Python web applications using AWS Lambda and AWS API Gateway. Zappa handles all of the configuration and deployment automatically - now, you can deploy an infinitely scalable application to the cloud with a single command - all for a minute fraction of the cost of a traditional web server.

You can see the core library on GitHub here, or the first major client library, django-zappa, here. I have also produced a screencast here if you want to follow along while trying it out.

Serverless?

I've used the word "serverless" to describe Zappa - but what does that actually mean? Obviously, it's not completely serverless - there still is a machine returning the HTTP response to the client.

The difference is that the server's entire lifespan exists within the lifecycle of a given HTTP request. Where normal web servers like Apache and Nginx have to sit idle 24/7, waiting for new requests to come in, with Zappa, the server is created after the HTTP request comes in through API Gateway. It then turns the API Gateway request into normal Python WSGI, processes the request, and returns it back through the API Gateway to the client. And then, poof - the server is gone.

Advantages

Scalability

This comes with some major advantages over traditional web servers. The first is scalability. Because AWS Lambda handles all of the requests, you can have as many of them processed in parallel as you need. With AWS Lambda, you get 100 function executions per second right out of the box, but the limit is arbitrary, and if you need to scale beyond it you only have to ask Amazon to raise your limit.

Cost

The next major advantage is cost. With AWS Lambda, you pay by the millisecond. So rather than paying to have a beefy EC2 machine running 24/7 for your website, you only pay based on the number of requests you serve - which typically means you'll only be paying pennies per month for an ordinary website. Not to mention the cost savings from not having to spend time on deployment, operations, and maintenance!

Maintainability and Ease of Use

Zappa is also incredibly easy to deploy. It's literally a single command - python manage.py deploy production - to configure and deploy your code, and after that, you never have to worry about it again. No provisioning machines, no setting up web servers, no DevOps, no operating systems, no security upgrades, no patching, no downtime. It just works!

Hacks

AWS Lambda and API Gateway are very new technologies, so there are quite a few hacks that make this all possible. Those include, but aren't limited to:

  • Using VTL to map body, headers, method, params and query strings into JSON, and then turning that into valid WSGI.
  • Attaching response codes to response bodies, Base64 encoding the whole thing, using that as a regex to route the response code, decoding the body in VTL, and mapping the response body to that.
  • Packing and Base58 encoding multiple cookies into a single cookie because we can only map one of a kind.
  • Turning cookie-setting 301/302 responses into 200 responses with HTML redirects, because we have no way to set headers on redirects.

If you want to learn more, take a look under the hood!

Future Work

Though Zappa is now feature-complete enough for an initial release, there is still a fair amount of work to do. For instance, there is only one client library so far, django-zappa, but it should be fairly easy to add support for Flask, Pylons, and any other WSGI Python web framework. The same principles that make Zappa possible should also work for NodeJS applications, but I find it much more comfortable to develop in Python.

Try it out!

So give it a shot! It'll seriously change the way you think about deploying web applications. (If you just want to see a page served by Zappa, check out this little hello-world page with a self-signed certificate - real website coming soon!)

The easiest way to get started using Zappa is either going to be to read the Django Zappa documentation or to watch the introductory screencast.

If you're interested in contributing, you can also check out the code from GitHub and start submitting pull requests!

Enjoy!


