Systemd or Upstart or System V

A quick reminder of how to easily find out (not 100% reliable, it depends on the distro) whether we are running systemd, Upstart or System V:

 

# Method 1: check the running init process and mounted systemd units
if [[ `/sbin/init --version` =~ upstart ]]; then echo using upstart;
elif [[ `systemctl` =~ -\.mount ]]; then echo using systemd;
elif [[ -f /etc/init.d/cron && ! -h /etc/init.d/cron ]]; then echo using sysv-init;
else echo cannot tell; fi

# Method 2: look for telltale strings inside the init binary
strings /sbin/init | grep -q "/lib/systemd" && echo SYSTEMD
strings /sbin/init | grep -q "sysvinit" && echo SYSVINIT
strings /sbin/init | grep -q "upstart" && echo UPSTART

The test machine is an Amazon Linux AMI release 2017.09.

The first test


# bash check_bash.sh
using upstart

The second test


UPSTART

The specific case with this AWS Linux AMI is that it runs a mix of System V and Upstart, which threw me off a bit. An example of an Upstart script would be:


description "node-exporter from prometheus"
start on filesystem or runlevel [2345]
stop on runlevel [!2345]

respawn
umask 022
chdir /
# console log - uncomment log stdout/stderr to /var/log/upstart/
# console none # Ubuntu 12.04++ requires explicitly saying we don't want to log anything

exec /usr/local/bin/node_exporter
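
Assuming the job definition above is saved as /etc/init/node_exporter.conf (the path and file name are just assumptions for this example), Upstart can be asked to re-read its job definitions with:

# sudo initctl reload-configuration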

List services


# initctl list

Start an Upstart service


# sudo initctl start node-export
node-export start/running, process 21704

Check an Upstart service


# sudo initctl status node-export
node-export start/running, process 21704
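
As a brief follow-up, the same initctl tool can stop or restart the job (reusing the job name from the output above):

# sudo initctl stop node-export
# sudo initctl restart node-export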

The same case but with systemd


[Unit]
Description=Node Exporter

[Service]
User=prometheus
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=default.target

 


# systemctl daemon-reload
# systemctl enable node_exporter.service
# systemctl start node_exporter.service
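
To confirm the unit is running and to follow its logs (assuming the unit file above was installed as /etc/systemd/system/node_exporter.service), something like:

# systemctl status node_exporter.service
# journalctl -u node_exporter.service -f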

Links

 

Terraform – deposed

Fighting with Terraform. I call a module that creates an AWS launch configuration (lc) and an auto scaling group (asg), but it contains an error that keeps updating the lc over and over. I lose my patience, force-delete the resources, and Terraform ends up in a buggy state. I get this error:

- module.celery_asg.aws_launch_configuration.lc-app (deposed)

In essence Terraform complains, if I understand it correctly, about not being able to remove a non-existent resource that is a dependency of another non-existent resource.

What I could do to solve it was, first of all, to fix the errors in the variables I was passing to the module (conditionals in another module), and then:

$ terraform plan
[...]
Plan: 20 to add, 0 to change, 1 to destroy.

It detects things to destroy that no longer exist:

$ terraform state rm module.eks.aws_launch_configuration.eks
1 items removed.
Item removal successful.

$ terraform plan
[...]
Plan: 21 to add, 0 to change, 0 to destroy
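
Before removing anything from the state it can also help to list the addresses Terraform is tracking and inspect the suspect one; terraform state list and terraform state show are standard subcommands, and the resource address below is simply the one from this example:

$ terraform state list | grep launch_configuration
$ terraform state show module.eks.aws_launch_configuration.eks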

Links
https://github.com/hashicorp/terraform/issues/18643

K8s / Kubernetes – quick reference notes

Brief quick-reference notes; this will be updated little by little.

Services

  • ClusterIP
    • Exposes:
      • spec.clusterIP:spec.ports[*].port
    • You can only reach this service from inside the cluster, on its spec.clusterIP and port. If spec.ports[*].targetPort is set, traffic is routed from the port to the targetPort. The IP you get when you run kubectl get services is the IP assigned to that service inside the cluster, internally.
  • NodePort
    • Exposes:
      • <NodeIP>:spec.ports[*].nodePort
      • spec.clusterIP:spec.ports[*].port
    • If you access this service via nodePort, on the node's external IP, the request is routed to spec.clusterIP:spec.ports[*].port, which in turn routes it to spec.ports[*].targetPort if configured. The service can also be accessed in the same way as a ClusterIP (a minimal manifest sketch follows this list).
    • Your NodeIPs are the external IP addresses of the nodes. You cannot reach the service at spec.clusterIP:spec.ports[*].nodePort.
  • LoadBalancer
    • Exposes:
      • spec.loadBalancerIP:spec.ports[*].port
      • <NodeIP>:spec.ports[*].nodePort
      • spec.clusterIP:spec.ports[*].port
    • You can reach the service via your load balancer's IP, which routes the request to the nodePort, which in turn routes to the ClusterIP port. You can also access this service as you would a NodePort or a ClusterIP.
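
To make the port fields concrete, here is a minimal sketch of a NodePort Service (the name demo-svc, the app: demo selector and the port numbers are all made up for this example), applied from stdin:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-svc              # hypothetical name
spec:
  type: NodePort
  selector:
    app: demo                 # hypothetical label selector
  ports:
    - port: 80                # spec.ports[*].port       -> reachable at spec.clusterIP:80
      targetPort: 8080        # spec.ports[*].targetPort -> the container's port
      nodePort: 30080         # spec.ports[*].nodePort   -> reachable at <NodeIP>:30080
EOF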

What happens when we create resources via YAML on AWS?

  • An ALB is created via a NodePort-type service plus an Ingress
  • An ELB is created via a LoadBalancer-type service (quick example below)
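
As a quick illustration of the second case, exposing an existing Deployment as a LoadBalancer-type service is enough for Kubernetes to provision an ELB on AWS (the deployment name and ports here are hypothetical):

kubectl expose deployment demo --type=LoadBalancer --port=80 --target-port=8080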

Map a namespace port locally

Practical example: a blackbox exporter deployment inside the production namespace. We want to reach that pod:port locally over HTTP to query metrics. We map it with port-forward as follows:

kubectl port-forward blackbox-exporter-prometheus-blackbox-exporter-79c8455888-r57kx 9115:9115 -n production
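
With the forward in place, the metrics can be queried locally (assuming the exporter serves them on its default /metrics path):

curl http://localhost:9115/metrics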

Logs

Multi-container pod

kubectl logs apo-api-analitica-1560416400-7bf2l --all-containers
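
To follow a single container instead, the -c flag selects it (the container name api here is hypothetical):

kubectl logs apo-api-analitica-1560416400-7bf2l -c api -f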

Tools

Links


AWS Solutions Architect Associate (SAA) 2018 – II

Topics covered: CloudFront, Storage Gateway, Snowball

CloudFront

Terminology

  • Edge Location: the location where content is cached; it can also be written to. It is separate from an AWS Region/AZ, and there are actually more Edge Locations than AWS Regions. Caching files from the origin at Edge Locations speeds up delivery of videos, images, etc. Also used for S3 Transfer Acceleration.
  • Origin: can be an S3 bucket, an EC2 instance, an ELB or Route53
  • Distribution: the name given to our CDN
  • Web distribution: typically used for websites
  • RTMP: Used for media streaming
Objects are cached for the life of the TTL (in seconds; 24 hours, i.e. 86400 seconds, by default).
You can clear cached objects (but you will be charged).
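
Clearing cached objects is done with an invalidation; a hedged example using the AWS CLI (the distribution ID below is a placeholder) would be:

aws cloudfront create-invalidation --distribution-id E1EXAMPLE123 --paths "/*"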

Storage Gateway

  • All data transferred between any type of gateway appliance and AWS storage is encrypted using SSL.
  • By default, all data stored by AWS Storage Gateway in S3 is encrypted server-side with Amazon S3-Managed Encryption Keys (SSE-S3).
  • Also when using the file gateway, you can optionally configure each file share to have your objects encrypted with AWS KMS-Managed Keys using SSE-KMS.

Modes available

  • File Gateway (NFS): flat files stored in S3 (PDFs, pictures, videos, etc.)
  • Volume Gateway (iSCSI): virtual hard disk
    • Stored Volumes: the entire copy of the dataset is stored on site and is backed up asynchronously to S3.
    • Cached Volumes: the entire dataset is stored in S3 and the most frequently accessed data is cached on site.
  • Gateway-Virtual Tape Library (VTL): for backups; create virtual tapes and send them to S3.

File Gateway (NFS)

Files are stored as objects in an S3 bucket and accessed through an NFS mount. Ownership, permissions and timestamps are durably stored in S3. Once files are transferred to S3, they can be managed as native S3 objects, and bucket features such as versioning, lifecycle management, cross-region replication, etc. apply directly.

Volume Gateway (iSCSI)

Virtual hard disks presented on-premises via iSCSI that you back up to S3. Data written to these volumes can be asynchronously backed up as point-in-time snapshots of your volumes and stored as EBS snapshots. Snapshots are incremental.

Stored Volumes

Let you store your primary data locally while asynchronously backing up that data to AWS. Low-latency access to the entire dataset, with durable backups. Data written to your stored volumes is kept on your on-premises storage hardware and backed up to AWS S3 in the form of EBS snapshots.

Cached Volumes

Lets you use S3 as your primary data storage while retaining frequently accessed data locally in your storage gateway. The most recently accessed data stays on the on-premises storage hardware; older data lives in S3. Cached volumes offer substantial cost savings on primary storage and minimize the need to scale your on-premises storage.

Tape Gateway

Supported by NetBackup, Backup Exec, Veeam, etc. Instead of having physical tapes, you have virtual tapes.

SnowBall

The old service was Import/Export.
Snowball was announced at re:Invent 2015.

Import/Export

You send your own disks to AWS.

Snowball Standard

Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into AWS.
The AWS Snowball device costs less than AWS Snowball Edge.
Up to 80 TB.
Available in all regions.
256-bit encryption.
Industry-standard TPM.
Transferring data is simple, fast, secure and cheap. Once the data transfer job has been processed and verified, AWS performs a software erasure of the Snowball appliance.

SnowBall Edge

100 TB transfer device with on-board storage and compute capabilities.
Used to move data into and out of AWS, and as a temporary storage tier for large local datasets.
Snowball Edge is like an AWS datacenter in a box.

Snowmobile

It is a shipping container hauled by a truck.
100 petabytes. For moving massive volumes of data to the cloud. US only.
Used for datacenter migrations to AWS.

AWS Solutions Architect Associate (SAA) 2018 – I part

In this post I will leave some of the notes I took while studying for the AWS SAA. I use Evernote to keep notes, but over time I have decided to go back to the blog, since it is a better way to keep my notes up to date. I will update this post little by little. The notes will be in English because that is how I took the course.

The definitions of the different services are taken either from the AWS documentation or from the comments of the instructor of the course I took.

Topics covered: S3
