Add ignore support via annotation
Signed-off-by: Alex Ellis (OpenFaaS Ltd) <[email protected]>
alexellis committed Oct 6, 2019
1 parent 32b6fee commit cc5b174
Showing 2 changed files with 25 additions and 18 deletions.
22 changes: 15 additions & 7 deletions README.md
@@ -2,33 +2,36 @@

Get a Kubernetes LoadBalancer where you never thought it was possible.

In cloud-based Kubernetes solutions, Services can be exposed as type "LoadBalancer" and your cloud provider will provision a LoadBalancer and start routing traffic, in another word: you get ingress to your service.
In cloud-based [Kubernetes](https://kubernetes.io/) solutions, Services can be exposed as type "LoadBalancer" and your cloud provider will provision a LoadBalancer and start routing traffic, in other words: you get ingress to your service.

inlets-operator brings that same experience to your local Kubernetes or k3s cluster (k3s/k3d/minikube/microk8s/Docker Desktop/KinD). The operator automates the creation of an [inlets](https://inlets.dev) exit-node on public cloud, and runs the client as a Pod inside your cluster. Your Kubernetes `Service` will be updated with the public IP of the exit-node and you can start receiving incoming traffic immediately.
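
For example, with the operator installed, getting a public endpoint for a local workload can be sketched as follows (a deployment named `nginx-1` is an assumption, not part of the operator itself):

```sh
# Expose the deployment with a Service of type LoadBalancer;
# the operator provisions the exit-node and patches in the public IP
kubectl expose deployment nginx-1 --port=80 --type=LoadBalancer

# Watch the EXTERNAL-IP column until it is populated
kubectl get svc -w
```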

## Who is this for?

This solution is for users who want to gain incoming network access (ingress) to their private Kubernetes clusters running on their laptops, VMs, within a Docker container, on-premises, or behind NAT. The cost of the LoadBalancer with an IaaS like DigitalOcean is around 5 USD / mo, which is 10 USD cheaper than an AWS ELB or GCP LoadBalancer.

Whilst 5 USD is cheaper than a "Cloud Load Balancer", this tool is for users who cannot get incoming ingress, not for saving money on public cloud.
Whilst 5 USD is cheaper than a "Cloud Load Balancer", this tool is for users who cannot get incoming connections due to their network configuration, not for saving money vs. public cloud.

## Status and backlog

This version of the inlets-operator is an early proof-of-concept, but it builds upon inlets, which is stable and widely used.

Backlog:
Backlog completed:
- [x] Provision VMs/exit-nodes on public cloud
- [x] Provision to [Packet.com](https://packet.com)
- [x] Provision to DigitalOcean
- [x] Automatically update Service type LoadBalancer with a public IP
- [x] Tunnel `http` traffic
- [x] Tunnel L7 `http` traffic
- [x] In-cluster Role, Dockerfile and YAML files
- [x] Raspberry Pi / armhf build and YAML file
- [x] Ignore Services with `dev.inlets.manage: false` annotation

Backlog pending:
- [ ] Garbage collect hosts when CRD is deleted
- [ ] CI with Travis (use openfaas-incubator/openfaas-operator as a sample)
- [ ] ARM64 (Graviton/Odroid/Packet.com) build and YAML file
- [ ] ARM64 (Graviton/Odroid/Packet.com) Dockerfile/build and K8s YAML files
- [ ] Automate `wss://` for control-port
- [ ] Move control-port and `/tunnel` endpoint to a high port, e.g. `31111`
- [ ] Garbage collect hosts when CRD is deleted
- [ ] Provision to EC2
- [ ] Provision to GCP
- [ ] Tunnel any `tcp` traffic (using `inlets-pro`)
@@ -124,6 +127,8 @@ go build && ./inlets-operator --kubeconfig "$(kind get kubeconfig-path --name="
```

# Monitor/view logs

```sh
kubectl logs deploy/inlets-operator -f
```

@@ -147,7 +152,7 @@ kubectl logs deploy/nginx-1-tunnel-client

Check the IP of the LoadBalancer and then access it via the Internet.

Example with OpenFaaS, make sure you give the port a name of `http`:
Example with OpenFaaS: make sure you give the `port` a `name` of `http`, otherwise a default of `80` will be used incorrectly.

```yaml
apiVersion: v1
@@ -169,6 +174,9 @@ spec:
type: LoadBalancer
```
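
The manifest is abridged above; purely for illustration, a complete `Service` of this shape might look like the sketch below (the namespace, selector, and port numbers are assumptions based on a default OpenFaaS install):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: gateway
  namespace: openfaas
spec:
  ports:
    - name: http        # must be named http, otherwise a default of 80 is assumed
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: gateway
  type: LoadBalancer
```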
To ignore a service such as `traefik`, type in: `kubectl annotate svc/traefik -n kube-system dev.inlets.manage=false`
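
To bring the Service back under management later, one option is to delete the annotation again; the trailing dash tells `kubectl annotate` to remove the key:

```sh
kubectl annotate svc/traefik -n kube-system dev.inlets.manage-
```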

## Contributing

Contributions are welcome, see the [CONTRIBUTING.md](CONTRIBUTING.md) guide.
21 changes: 10 additions & 11 deletions controller.go
@@ -267,19 +267,10 @@ func (c *Controller) syncHandler(key string) error {
}

service, _ := c.serviceLister.Services(namespace).Get(name)
// if err != nil {
// // The InletsLoadBalancer resource may no longer exist, in which case we stop
// // processing.
// if errors.IsNotFound(err) {
// utilruntime.HandleError(fmt.Errorf("service '%s' in work queue no longer exists", key))
// return nil
// }

// return err
// }

if service != nil {
if service.Spec.Type == "LoadBalancer" {
if service.Spec.Type == "LoadBalancer" &&
hasIgnoreAnnotation(service.Annotations) == false {

tunnels := c.operatorclientset.InletsoperatorV1alpha1().Tunnels(service.ObjectMeta.Namespace)
ops := metav1.GetOptions{}
@@ -626,6 +617,7 @@ func (c *Controller) handleObject(obj interface{}) {
}
klog.V(4).Infof("Recovered deleted object '%s' from tombstone", object.GetName())
}

klog.V(4).Infof("Processing object: %s", object.GetName())
if ownerRef := metav1.GetControllerOf(object); ownerRef != nil {
// If this object is not owned by a Tunnel, we should not do anything more
@@ -657,3 +649,10 @@ func makeUserdata(authToken string) string {
systemctl start inlets && \
systemctl enable inlets`
}

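// hasIgnoreAnnotation returns true when the dev.inlets.manage
// annotation is present and set to "false", i.e. when the operator
// should skip managing the Service.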
func hasIgnoreAnnotation(annotations map[string]string) bool {
if v, ok := annotations["dev.inlets.manage"]; ok && v == "false" {
return true
}
return false
}
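
A table-driven test for the new helper could look like the following sketch (the file name `controller_test.go` and `package main` are assumptions based on the repository layout):

```go
package main

import "testing"

func TestHasIgnoreAnnotation(t *testing.T) {
	cases := []struct {
		name        string
		annotations map[string]string
		want        bool
	}{
		{"ignore requested", map[string]string{"dev.inlets.manage": "false"}, true},
		{"explicitly managed", map[string]string{"dev.inlets.manage": "true"}, false},
		{"no annotations", nil, false}, // lookup on a nil map returns the zero value
	}

	for _, tc := range cases {
		if got := hasIgnoreAnnotation(tc.annotations); got != tc.want {
			t.Errorf("%s: got %v, want %v", tc.name, got, tc.want)
		}
	}
}
```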
