Semaphore Blog

News and updates from your friendly continuous integration and deployment service.

Building Pull Requests From Forked Repos


We are glad to announce that Semaphore now supports building pull requests from forked repositories.

Until now, in order to see pull request status you had to have the forked repository on Semaphore, along with the parent repo. However, GitHub recently removed the ability to push commit status from a forked repo to its parent, so we had to do some work to make our system better.

With this new feature, the forked repository does not have to be on Semaphore; all of its pull requests towards the parent repository will still be tested, and build status will be posted to the corresponding pull request page on GitHub:

If a new commit gets pushed to a pull request, it will be tested as well.

We have updated Semaphore’s dashboard to include incoming pull requests and their status. Pull request builds are shown alongside other branches, with a slightly different name format:

As with feature branches, once a pull request is merged it is removed from your dashboard. The same goes for closed pull requests.

Clojure on Semaphore

We’re very excited to announce official support for testing and deploying Clojure projects. Clojure is the third official language supported by Semaphore.

Continuous integration for Clojure projects in 1 minute

The steps for setting up your Clojure project on Semaphore are as simple as for a Ruby or JavaScript project.

If you don’t already have a Semaphore account, sign up for a free 30-day trial with unlimited builds, an unlimited number of collaborators, unlimited deployment and two processors for parallel testing.

New accounts need to authorize Semaphore to access their GitHub repositories, so that it can pull code, receive notifications about changes, post commit status in pull requests and so on.

On your dashboard, click on “Build a New project”. Semaphore will fetch basic information about projects that you have access to on GitHub and ask you to select one.

Semaphore will now analyze your repository and recognize that it is a Clojure project, presetting lein test as your build command. Leiningen 2.3.4 and OpenJDK 7 are preinstalled on the platform.

What remains is to optionally invite your collaborators to Semaphore and launch the first build. From that point on, Semaphore will automatically test your project after every push to your Git repository.

After the first build, your dependencies will be cached for later reuse.

If you are developing a project that works with a database, see our documentation page about database access for information about the engines available and required credentials.

Why Clojure

Clojure is a dynamic, general-purpose programming language that runs on the Java Virtual Machine. It is a dialect of Lisp, sharing the code-as-data philosophy, support for macros and functional programming. It provides immutable, persistent data structures and is designed to support high-performance, concurrent applications.

The heart of Clojure is simplicity. The bottom line is that all code lives in small functions, grouped in namespaces, whose scope of effect is limited and understandable. This gives you confidence and leads to easy composition of higher-order systems. It does not take long to learn the syntax of the language; the challenge lies in changing the traditional object-oriented mindset, used to mutability and side effects, and working with a different set of programming abstractions.

These characteristics sound very different if the last language you’ve studied was something like Ruby or Python. You cannot write imperative code in Clojure, design patterns go out of the window, concurrency is easy to implement and reason about, and unless you really try you can’t create global, mutable state. There are many use cases where this is a blessing; so far most of them lie on the axis between complex systems and data analysis.

Clojure adoption is on the rise, people at conferences are beginning to talk about replacing “legacy Ruby apps” and quite a few smart and experienced engineers are recommending it as the next language developers should master.

Useful resources for learning Clojure

Tutorials

Presentations

Books

When you’re wondering which tools are available for some job, check out Clojure Toolbox.

Webhooks API and More Updates

Our goal is to let our users fit Semaphore to a wide range of workflows, while keeping the core product simple to use. The API is a big part of that, and today I would like to announce some improvements we’ve made.

First, we added some useful information to existing API endpoints:

  • There is now an html_url for projects on Semaphore in the Project API response.
  • The webhook payload now also includes event (which can be “build” or “deploy”) and project_hash_id, a unique identifier of the project on which the event occurred.
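
As a sketch of what a receiver might do with the new fields (the two field names come from the list above; the handler itself is a hypothetical example):

```ruby
require "json"

# Hypothetical webhook receiver: "event" and "project_hash_id" are the new
# payload fields; everything else here is an assumption for illustration.
def handle_webhook(raw_payload)
  payload = JSON.parse(raw_payload)
  case payload["event"]
  when "build"  then "build finished for #{payload['project_hash_id']}"
  when "deploy" then "deploy finished for #{payload['project_hash_id']}"
  else "ignored"
  end
end

handle_webhook('{"event":"build","project_hash_id":"abc123"}')
# => "build finished for abc123"
```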

Second, and more interesting, is the new webhooks API. You can now easily create, update and delete webhooks via the API, which makes it easy to automate your favourite Semaphore integration.

You can always find the latest API documentation on our documentation site.

We are beginning to plan the next version of Semaphore API and we’re ready to hear any suggestions about features you would like to see. And of course about new use cases or tools that you would like to integrate. Just write a comment below or get in touch via our standard support channel.

Happy integrating!

Rails Testing Antipatterns: Controllers

This is the third post in the series about antipatterns in testing Rails applications. See part 1 for thoughts on fixtures and factories, and part 2 on models.

Skipping controller tests

When starting out with Rails, with all the “magic” going on it can be easy to forget that controllers are just classes that need tests as well. If it seems to be hard to write a test for a controller, you should embrace the pain; it is telling you that your controller is doing too much and should be refactored.

A controller handles the request and prepares a response. So it shouldn’t be subscribing a user to a newsletter or preparing an order. In general we’ve found that controllers should be limited to:

  • Extracting parameters;
  • Checking for permissions, if necessary;
  • Directly calling one thing from the inside of the app;
  • Handling any errors, if they affect the response;
  • Rendering a response or redirecting.
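
Stripped of Rails, those responsibilities can be sketched in plain Ruby (all names here are hypothetical; the permission check is omitted for brevity):

```ruby
# A controller action reduced to its allowed responsibilities:
# extract parameters, make one call into the app, handle errors, respond.
def show(params, finder)
  user_id = Integer(params.fetch("user_id"))  # extract parameters
  report  = finder.call(user_id)              # one call into the app
  return [404, "Not Found"] if report.nil?    # handle errors affecting the response
  [200, report]                               # render a response
end

finder = ->(id) { id == 1 ? "Annual report" : nil }
show({ "user_id" => "1" }, finder)
# => [200, "Annual report"]
```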

I’ve found that controllers are best tested with a mocking approach, since they tie many things together. The freedom you get from mocking should be used as an advantage in molding the controller method’s code to an ideal shape.

Creating records in controller tests

Because Rails controllers should be designed as “callers” of logic defined - and tested - elsewhere, it is pure overhead to involve the database in controller tests. Use mocks and test doubles to decouple from other application layers and stay focused on the task of specifying the functional behavior of the controller.

This is not really a controller spec but a semi-integration test:

describe "GET show" do
  let(:user) { create(:user) }
  let(:report) { create(:report, :user => user) }

  it "fetches user's report" do
    user_id = user.id
    report_file = report
    get :show, :user_id => user_id, :id => report_file.id
    assigns(:report).id.should eql(report.id)
    response.status.should eql(200)
  end
end

The code above is more like an attempt to write a passable test for an existing implementation. Contrast that with writing a test first, thinking only about interfaces, one assertion at a time, without interacting with the database:

describe "GET show" do
  let(:user) { build_stubbed(User) }
  let(:report) { build_stubbed(Report) }

  before do
    User.stub(:find) { user }
    user.stub(:find_report) { report }
  end

  describe "GET show" do
    it "finds the user's report" do
      user.should_receive(:find_report) { report }

      get :show, :user_id => user.id, :id => report.id
    end

    it "assigns a report" do
      get :show, :user_id => user.id, :id => report.id

      assigns(:report).should eql(report)
    end

    it "renders the 'show' template" do
      get :show, :user_id => user.id, :id => report.id

      response.should render_template("show")
      response.code.should eql(200)
    end
  end
end

Sometimes, however, data is sensitive enough, or goes through some kind of an API, to warrant an integration test with real records. This is a use case for request specs.

# spec/requests/login_attempts_spec.rb
require "spec_helper"

describe "login attempts" do

  context "failed" do

    it "records a new failed login attempt" do
      expect {
        post user_session_path,
          :user => { :email => "user@example.com", :password => "badpassword" }
      }.to change(LoginAttempt, :count).by(1)
    end
  end
end

No controller tests, only integration

One might argue that it is not necessary to test controllers when there are comprehensive integration tests.

In practice, creating an integration test for every possible edge case is much harder and never really done. Integration tests are also much slower, because they always work with the database and may involve UI testing. Controller specs that use mocks and test doubles are extremely fast.

Complex controller spec setup

Epic setup is generally a code smell, telling us that our class is doing too much. See this context for verifying that a welcome email for a newly registered user goes out:

describe UsersController do

  describe "POST create" do

    context "on successful save" do

      before do
        @user = double(User)
        @user.stub(:save).and_return(true)

        User.stub(:new) { @user }

        @mailer = double(UserMailer)
        UserMailer.stub(:welcome) { @mailer }
        @mailer.stub(:deliver)

        ReferralConnector.stub(:execute)
        NewsletterSignupService.stub(:execute)
      end

      it "sends a welcome email" do
        @mailer.should_receive(:deliver)

        post :create, :user => { :email => "jo@example.com" }
      end
    end
  end
end

Difficult tests always go hand in hand with a problematic implementation. The spec above is for a controller that looks something like this:

class UsersController

  def create
    @user = User.new(user_params)

    if @user.save
      UserMailer.welcome(@user).deliver
      ReferralConnector.execute(@user)
      NewsletterSignupService.execute(@user.email, request.ip)

      redirect_to dashboard_path
    else
      render :new, :error => "Registration failed"
    end
  end

  private

  def user_params
    params[:user].permit(*User::ACCESSIBLE_ATTRIBUTES)
  end
end

A developer following the BDD approach, and thinking a bit in advance, could write a spec for the method handling user registration so that all post-signup work, whatever it may be, is delegated to a PostSignup class, which can be tested in isolation.

describe "POST create" do

  context "on successful save" do

    let(:post_signup) { double(PostSignup, :run => nil) }

    before do
      User.any_instance.stub(:save) { true }
      PostSignup.stub(:new) { post_signup }
    end

    it "runs the post signup process" do
      PostSignup.should_receive(:new) { post_signup }
      post_signup.should_receive(:run)

      post :create
    end

    it "redirects to dashboard" do
      post :create
      response.should redirect_to(dashboard_path)
    end
  end
end
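
The PostSignup object itself can then be a plain class. A runnable sketch (only the class name comes from the text; its internals are assumptions): each post-signup step is injected as a callable, so the class is unit-testable without Rails, mailers or the database.

```ruby
# Hypothetical PostSignup object: groups everything that happens after a
# successful registration behind a single #run method.
class PostSignup
  def initialize(user, steps)
    @user  = user
    @steps = steps # e.g. welcome email, referral connection, newsletter signup
  end

  def run
    @steps.each { |step| step.call(@user) }
  end
end

# In a unit test, the steps can be plain lambdas recording what happened:
log = []
steps = [
  ->(user) { log << "welcome email to #{user}" },
  ->(user) { log << "newsletter signup for #{user}" }
]
PostSignup.new("jo@example.com", steps).run
log
# => ["welcome email to jo@example.com", "newsletter signup for jo@example.com"]
```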

Hands-free deployment of static websites

technical drawing

Static websites seem to be going through a period of renaissance. About fifteen years ago, webmasters (the people you could always contact) manually managed all HTML files for a website. Due to lots of duplicate content, “Find & Replace in Files” was the hottest feature in text editors. Version control was mostly unheard of, and updates were done over FTP.

In the mid-2000s, the most common way of creating a simple website with some pages, news, a blog and so on was to use a CMS. Maintaining every one of these websites thus included managing upgrades of the CMS, as well as of the database which was driving the content.

When Tom Preston-Werner launched Jekyll, a static site generator, in 2008, many programmers started porting their blogs and personal websites to it. It became obvious then: if all my content is text, let’s generate it from some readable set of files, written in my favorite editor and tracked in version control. Around the same time, the browser became an alternative operating system with a rich API and new capabilities in scripting and rendering. Today many web development and design shops use static site generators to create websites that do not have the complex requirements of a web application.

Deploying Semaphore documentation

Two important parts of Semaphore are made with Middleman, another static site generator: this blog and our documentation site. I’ll show you how we use Semaphore to automatically publish new content.

The Git repository is open source and on GitHub. We’re using the S3Sync plugin to deploy the generated site to Amazon S3.

We’ve added the project on Semaphore to build with the following commands:

bundle install --path .bundle
bundle exec middleman build
bundle exec middleman s3_sync

To make the S3 plugin work, we also need valid S3 credentials. We do not store these in the repository; instead, we have defined a custom configuration file on Semaphore which becomes available during the build:

File with sensitive credentials which Semaphore will insert during the build.

Now I can either edit the site files locally and git push, or use GitHub’s web interface to create and edit files. Either way, Semaphore will run the commands above after every push and make sure that the latest content is continuously deployed to semaphorapp.com/docs.

In this case there isn’t a test command that makes sense so we do the deployment in builds. In cases where you can separate the build and deploy steps, you may find Semaphore’s “Generic” deploy strategy useful, a small feature we launched today.

The Generic strategy lets you specify any deployment steps to run after successful builds, with a deploy SSH key being completely optional.

Announcing Support for JavaScript and Node.js Projects on Semaphore

We are very excited to announce that Semaphore now has full support for testing and deploying JavaScript projects via Node.js.

Semaphore started out with the goal of providing the best CI experience for private Ruby projects. Over time we’ve been gradually making it possible to work with projects in other languages by expanding our build platform. The web browser is the most popular platform today, and the sheer number of JavaScript projects on GitHub reflects that, as does the number of requests from our existing users. Many were already testing their JavaScript projects on Semaphore, so it was a logical next step for us to upgrade its status.

Setting up your Node.js project

The steps for setting up your Node.js project on Semaphore are as simple as for setting up a Ruby project.

During the analysis of your repository Semaphore will recognize that it is a JavaScript project and preselect a version of Node.js to be used in builds. You can still change the target language and version manually of course. The suggested build commands will reflect your selection.

Semaphore can autoconfigure your JavaScript project for continuous integration.

In the build environment Semaphore has the Node Package Manager preinstalled, so you can easily install your dependencies specified in package.json, or install new global packages by using npm, for example:

npm install <some_cool_node_package>

If you do not have a Semaphore account, sign up and add your first JavaScript project today.

Available versions

Semaphore’s current build platform has the following versions of Node.js preinstalled:

  • 0.8.26
  • 0.10.24 (latest stable version)
  • 0.11.10 (latest unstable version)

For managing versions we use Node Version Manager. For easy switching, we have set the corresponding version aliases as:

  • 0.8 => 0.8.26
  • 0.10 => 0.10.24
  • 0.11 => 0.11.10

If you want to use another version, you can simply add a build setup command to install it:

nvm install <desired_node_version>

Semaphore also makes the following tools available:

Database access

If you are developing a Node.js project that works with a database, see our refreshed Database access documentation page for information about the engines available and required credentials.

For Rails apps we generate a working config/database.yml based on the database selected in project settings. We are open to identifying a similar common use case for Node.js apps, so if you have a Node.js project that works with a database, feel free to get in touch and tell us how you are configuring it.

Rails Testing Antipatterns: Models

This is the second post in the series about antipatterns in testing Rails applications. See part 1 for thoughts related to fixtures and factories.

Creating records when building would also work

With plain ActiveRecord or a factory library you can either create records or merely instantiate them. However, probably because testing models so often requires records to be present in the database, I have seen myself and others forget the build-only option. It is one of the key ways to make your test suite faster.

Consider this spec for full_name:

describe User do
  describe "#full_name" do

    before do
      @user = User.create(:first_name => "Johnny", :last_name => "Bravo")
    end

    it "is composed of first and last name" do
      @user.full_name.should eql("Johnny Bravo")
    end
  end
end

It triggers an interaction with the database and disk drive to test logic which does not require them at all. Using User.new would suffice. Multiply that by the number of examples reusing that before block, and all similar occurrences, and you get a lot of overhead.
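
Stripped of ActiveRecord, the behavior being specified is pure string manipulation, which a plain-Ruby sketch makes explicit (a stand-in Struct, not the app’s actual model):

```ruby
# Plain-Ruby stand-in: full_name needs no database at all.
User = Struct.new(:first_name, :last_name) do
  def full_name
    "#{first_name} #{last_name}"
  end
end

User.new("Johnny", "Bravo").full_name
# => "Johnny Bravo"
```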

All model, no unit tests

MVC plus services is good enough for typical Rails apps when they’re starting out. Over time however the application can benefit from some domain specific classes and modules. Having all logic in models ties it to the database, eventually leading to tests so slow that you avoid running them.

If your experience with testing started with Rails, you may associate unit tests with Rails models. In “classical” testing terminology, however, unit tests are mostly tests of simple, “plain-old” objects. As described by the test pyramid metaphor, unit tests should be by far the most frequent and the fastest to run.

Not having any unit tests in a large application is not technically an antipattern in testing, but an indicator of a possible architectural problem.

Take for example a simple shopping cart model:

class ShoppingCart < ActiveRecord::Base
  has_many :cart_items

  def total_discount
    cart_items.collect(&:discount).sum
  end
end

The test would go something like this:

require "spec_helper"

describe ShoppingCart do

  describe "total_discount" do

    before do
      @shopping_cart = ShoppingCart.create

      @shopping_cart.cart_items.create(:discount => 10)
      @shopping_cart.cart_items.create(:discount => 20)
    end

    it "returns the sum of all items' discounts" do
      @shopping_cart.total_discount.should eql(30)
    end
  end
end

Observe how the logic of ShoppingCart#total_discount actually doesn’t care where cart_items came from. They just need to be present and have a discount attribute.

We can refactor it to a module and put it in lib/. That would be a good first pass — merely moving a piece of code to a new file has benefits. I personally prefer to use modules only when I see behavior which will be shared by more than one class. So let’s try composition. Note that in real life you usually have more methods that can be grouped together and placed in a more sophisticated directory structure.

# lib/shopping_cart_calculator.rb
class ShoppingCartCalculator

  def initialize(cart)
    @cart = cart
  end

  def total_discount
    # inject(0, :+) returns 0 for an empty cart; plain inject(:+) would return nil
    @cart.cart_items.collect(&:discount).inject(0, :+)
  end
end

To test the logic defined in ShoppingCartCalculator, we now no longer need to work with the database, or even Rails at all:

require 'shopping_cart_calculator'

describe ShoppingCartCalculator do

  describe "#total_discount" do

    before do
      @cart = double
      cart_items = [stub(:discount => 10),
                    stub(:discount => 20)]
      @cart.stub(:cart_items) { cart_items }

      @calculator = ShoppingCartCalculator.new(@cart)
    end

    it "returns the sum of all cart items' discounts" do
      @calculator.total_discount.should eql(30)
    end
  end
end

Notice how the spec for the calculator does not need to require spec_helper any more. Requiring spec_helper in a spec file generally means that your whole Rails application needs to load before the first example runs. Depending on your machine and application size, running a single spec may then take from a few to 30+ seconds. This can be avoided with application preloading tools such as Spork or Spring, but it is nice if you can achieve independence from them.

To get more in-depth with the idea I recommend watching the Fast Rails Tests talk by Corey Haines. Code Climate has a good post on practical ways to decompose fat ActiveRecord models.

Note that new Rails 4 apps are encouraged to use concerns, which are a handy way of extracting model code that DHH recommended a while ago.

The elegance of the entire suite being extremely fast and not requiring spec_helper is secondary, in my opinion. Many projects can benefit from this technique even when partially applied, because they have all their logic in models: thousands of lines of code that can be extracted into separate modules or classes. Also, do not interpret the isolated example above as a call to remove all methods from models in projects of every size. Develop and use your own sense of when it is a good moment to perform such architecture changes.

Using let to set context

Using let(:model) (from RSpec) in model tests may lead to unexpected results. let is lazy, so it doesn’t execute the provided block until you use the referenced variable further in your test. This can lead to errors which can make you think that there is something wrong with the database, but of course it is not; the data is not there because the let block has not been executed yet. Non-lazy let! is an alternative, but it’s not worth the trouble. Simply use before blocks to initialize data.

describe ShoppingCart do

  describe ".empty_carts" do
    let(:shopping_cart) { ShoppingCart.create }

    it "returns all empty carts" do
      # fails because let is never executed
      ShoppingCart.empty_carts.should have(1).item
    end
  end
end
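
The laziness itself is easy to demonstrate in plain Ruby; this is only a sketch of the mechanism behind let, not RSpec’s implementation:

```ruby
# let(:name) { ... } behaves like a lazily memoized block: it runs only
# when first referenced, and the result is cached for later references.
runs = 0
memo = nil
shopping_cart = -> { memo ||= (runs += 1; :cart_record) }

runs                 # => 0 -- no reference yet, so nothing hit the database
shopping_cart.call   # => :cart_record -- first reference runs the block
shopping_cart.call   # => :cart_record -- cached, the block does not run again
runs                 # => 1
```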

On a side note, some people prefer not to use let at all, anywhere. The idea is that let is often used as a way to “DRY up” specs, when what is actually going on is hiding the complexity of the test setup, which is a code smell.

Incidental database state

Magic numbers and incidental state can sneak in specs that deal with database records.

describe Task do
  describe "#complete" do

    before do
      @task = create(:task)
    end

    it "completes the task" do
      @task.complete

      # Creating another completed task above while extending the test suite
      # would make this test fail.
      Task.completed.count.should eq(1)
    end

    # Take 2: aim to measure in relative terms:
    it "completes the task" do
      expect { @task.complete }.to change(Task.completed, :count).by(1)
    end
  end
end

Rails Testing Antipatterns: Fixtures and Factories

In the upcoming series of posts, we’ll explore some common antipatterns in writing tests for Rails applications. The presented opinions come from our experience in building web applications with Rails (we’ve been doing it since 2007) and are biased towards using RSpec and Cucumber. Developers working with other technologies will probably benefit from reading as well.

Antipattern zero: no tests at all

If your app has at least some tests, congratulations: you’re among the better developers out there. If you think that writing tests is hard — it is, but you just need a little more practice. I recommend reading the RSpec book if you haven’t yet. If you don’t know how to add more tests to a large system you inherited, I recommend going through Working Effectively with Legacy Code. If you have no one else to talk to about testing in your company, there are many great people to meet at events such as CITCON.

If you recognize some of the practices discussed here in your own code, don’t worry. The methodology is evolving and many of us have “been there and done that”. And finally, this is all just advice. If you disagree, feel free to share your thoughts in the comment section below. Now, onwards with the code.

Fixtures and factories

Using fixtures

Fixtures are Rails’ default way to prepare and reuse test data. Do not use fixtures.

Let’s take a look at a simple fixture:

# users.yml
marko:
  first_name: Marko
  last_name: Anastasov
  phone: 555-123-6788

You can use it in a test like this:

describe User do
  describe "#full_name" do
    it "is composed of first and last name" do
      user = users(:marko)
      user.full_name.should eql("Marko Anastasov")
    end
  end
end

There are a few problems with this test code:

  • It is not clear where the user came from and how it is set up.
  • We are testing against a “magic value” — implying something was defined in some code, somewhere else.

In practice these shortcomings are addressed by comment essays:

describe Dashboard do

  fixtures :all

  describe "#show" do
    before do
      # User with preferences to view posts about kittens
      # and in the group with special access to Burmese cats
      # with 4 friends that like ridgeback dogs.
      @user = users(:kitten_fan)
    end
  end
end

Maintaining fixtures of more complex records can be tedious. I recall working on an app where there was a record with dozens of attributes. Whenever a column would be added or changed in the schema, all fixtures needed to be changed by hand. Of course I only recalled this after a few test failures.

A common solution is to use factories. If you recall from the common design patterns, factories are responsible for creating whatever you need to create, in this case records. Factory Girl is a good choice.

Factories let you maintain simple definitions in a single place, but manage all data related to the current test in the test itself when you need to. For example:

FactoryGirl.define do
  factory :user do
    first_name "Marko"
    last_name  "Anastasov"
    phone "555-123-6788"
  end
end

Now your test can set the related attributes before checking for the expected outcome:

describe User do
  describe "#full_name" do
    before do
      @user = build(:user, :first_name => "Johnny", :last_name => "Bravo")
    end

    it "is composed of first and last name" do
      @user.full_name.should eql("Johnny Bravo")
    end
  end
end

A good factory library will let you not just create records, but easily generate unsaved model instances, stubbed models, attribute hashes, define types of records and more — all from a single definition source. Factory Girl’s getting started guide has more examples.

Factories pulling too many dependencies

Factories let you specify associations, which get automatically created. For example, this is how we say that creating a new Comment should automatically create a Post that it belongs to:

FactoryGirl.define do
  factory :comment do
    post
    body "groundbreaking insight"
  end
end

Ask yourself if creating or instantiating that post in every call to the Comment factory is really necessary. It might be if your tests require a record that was saved in the database, and you have a validation on Comment#post_id. But that may not be the case with all associations.

In a large system, calling one factory may silently create many associated records, which accumulates to make the whole test suite slow (more on that later). As a guideline, always try to create the smallest amount of data needed to make the test work.

Factories that contain unnecessary data

A spec is effectively a specification of behavior. That is how we look at it when we open one. Similarly, we look at factories as definitions of data necessary for a model to function.

In the first factory example above, including phone in the User factory was not necessary, unless there is a presence validation for it. If the data is not critical, just remove it.

Factories depending on database records

Adding a hard dependency on specific database records in factory definitions leads to build failures in CI environment. Consider the following example:

factory :active_schedule do
  start_date Date.today - 1.month
  end_date 1.month.since(Date.today)
  processing_status 'processed'
  schedule_duration ScheduleDuration.find_by_name('Custom')
end

It is important to know that the code in factory definitions is executed when the Rails test environment loads. This may not be a problem locally, because the test database was created and some kind of seed structure applied at some point in the past. In the CI environment, however, the build starts from a blank database, so the first Rake task it runs will fail. To reproduce and identify such an issue locally, run db:drop, followed by db:setup.

One way to fix this is to use factory_girl’s traits:

factory :schedule_duration do
  name "Test Duration"

  trait :custom do
    name "Custom"
  end
end

factory :active_schedule do
  association :schedule_duration, :custom
end

Another is to defer the initialization in a callback. This however adds an implicit requirement that test code defines the associated record before the parent:

factory :active_schedule do
  before(:create) do |schedule|
    schedule.schedule_duration = ScheduleDuration.find_by_name('Custom')
  end
end

Got anything to add? I’d love to hear your comments below.

Update 2014-01-22: Changed fixtures examples to use key-based lookup.

December platform update with Ruby 2.1.0 final, Grunt, Leiningen & more

The upcoming platform update, scheduled for December 30th, brings a couple of interesting changes.

Ruby 2.1.0 final was released on Christmas, and it is now coming to your builds on Semaphore as well.

Firefox is going up to version 26 (is anyone counting browser versions any more?). An upgrade of the selenium-webdriver gem may be necessary if you’re using Cucumber.

To make it faster to build Node.js and web-only projects, we’re making Grunt, the JavaScript task runner, and Bower, a package manager for the web, available preinstalled.

Leiningen, Clojure’s Swiss Army knife, becomes available as well.

As usual, full details are in the changelog.

Have a wonderful New Year!

Upgrade your paranoia with Brakeman

Brakeman is a wonderful tool that can scan your Ruby on Rails application for common vulnerabilities. It’s a static code analyzer that can help you find security problems before you release the application.

Since you can introduce a vulnerability at any stage of your development process, a tool like this should scan your application as often as possible, preferably on every commit. That makes it an ideal candidate for the continuous delivery pipeline.

Fortunately, it’s easy to make Brakeman part of your CI.

Install Brakeman by adding it to your Gemfile:

group :development do
  gem 'brakeman', :require => false
end

And add the following command to your build setup on Semaphore:

bundle exec brakeman -z

The -z flag tells Brakeman to fail if it finds any vulnerabilities. This means that your build will fail before any issues reach production, letting you know that you should fix the problem.

We definitely recommend that you introduce security checks to your continuous delivery pipeline, and Brakeman is a good choice for Rails apps.
