

Showing posts from July, 2018

GitLab Migration: Case Study

Yes, the heading is correct: GitLab is moving from Microsoft Azure to Google Cloud Platform (GCP). I know what you might be thinking, but this decision wasn't made because Microsoft decided to buy GitHub; the migration was being planned even before that. Most of the material in this article is taken from GitLab's original announcement: Here. They believe Kubernetes is the future: a technology that makes reliability at massive scale possible. That is why, earlier this year, they shipped native integration with Google Kubernetes Engine (GKE) to give GitLab users a simple way to use Kubernetes. They've chosen GCP as their cloud provider for the same reason, their desire to run GitLab on Kubernetes: Google invented Kubernetes, and GKE has the most robust and mature Kubernetes support. Migrating to GCP is the next step in their plan to make GitLab.com ready for mission-critical workloads. That's what they think. I think even if they con

Activation Functions (Neural Networks)

Activation functions are essential for a neural network to learn and represent complicated, non-linear functional mappings between the inputs and the response variable. They introduce non-linear properties into the network. Their main purpose is to convert the input signal of a node in an A-NN into an output signal, and that output signal is then used as an input to the next layer in the stack. Specifically, in an A-NN we take the sum of products of the inputs (X) and their corresponding weights (W), apply an activation function f(x) to it to get the output of that layer, and feed it as input to the next layer. In Keras, we can use a different activation function for each layer, which means that in our case we have to decide which activation function to use in the hidden layer and which to use in the output layer. Activations can either be used through an Activation layer, or through the activation argument supported by all forward layers: "Input times weights
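To make the "sum of products plus activation" idea and the two Keras options concrete, here is a minimal sketch. It assumes the TensorFlow Keras API; the input values, weights, and layer sizes are made up purely for illustration.

```python
import numpy as np
from tensorflow.keras import layers, models

# By hand: the layer output is f(X . W + b), here with ReLU as f.
X = np.array([[1.0, 2.0, 3.0]])        # input signal (one sample, three features)
W = np.array([[0.5], [-1.0], [0.25]])  # corresponding weights
b = 0.1                                # bias
relu = lambda z: np.maximum(0.0, z)
print(relu(X @ W + b))                 # activated output, fed to the next layer

# In Keras: via the `activation` argument of a forward layer,
# or via a standalone Activation layer.
model = models.Sequential([
    layers.Input(shape=(3,)),
    layers.Dense(8, activation="relu"),   # activation passed as an argument
    layers.Dense(1),
    layers.Activation("sigmoid"),         # activation as its own layer
])
model.summary()
```

Both styles produce the same computation; passing the activation as an argument is just the more compact form, while a separate Activation layer can be handy when you want to inspect the pre-activation values.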