Helm causes ‘Could not get apiVersions from Kubernetes’

When using Helm > 2.14.3 you can suddenly end up with

Could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs

errors when handling (installing, upgrading, removing) helm charts that use CRDs.

The (working) solution according to https://github.com/jetstack/cert-manager/issues/2273 is to downgrade Helm to version 2.14.3.
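
A minimal sketch of how such a downgrade could look on a Linux amd64 machine (adjust the platform in the archive name for other systems):

curl -sSL https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz | tar xz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm version --client    # should now report v2.14.3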

Also note the relatively low quality of recent 2.x Helm versions, with severe bugs around 2.14.3: in my experience 2.14.0, 2.14.1 and 2.14.2 are unusable due to a resource merging bug (https://github.com/helm/helm/issues/5750), while release 2.15.0 is unusable due to an integer enumeration bug (e.g. https://github.com/istio/istio/issues/18172).

So far I believe it is safe to stay with 2.14.3.

Ssh login without interaction

This is a short summary of what you need to avoid any type of interaction when accessing a machine via SSH.

Interaction Pitfalls:

  • Known hosts entry is missing.
  • Known hosts entry is incorrect.
  • Public key is incorrect or missing.
  • Keyboard-interactive authentication kicks in when public key authentication fails.
  • stdin is connected and the remote command waits for input.

Here is what you need to do to circumvent everything:

  • Ensure the correct key is used (if necessary pass the identity file explicitly using -i)
  • Pass "-o UserKnownHostsFile=/dev/null" to avoid termination when the known hosts key has changed (Note: this is highly insecure when used for untrusted machines! But it might make sense in setups without a correctly maintained known_hosts)
  • Pass "-o StrictHostKeyChecking=no" to avoid SSH complaining about missing known host keys (caused by using /dev/null as input).
  • Pass "-o PreferredAuthentications=publickey" to avoid password prompts when public key authentication doesn't work
  • Pass "-n" so stdin is not read and the remote command cannot wait for input (note that it has to come before the hostname)
  • Example command line:

ssh -n -i my_priv_key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey user@host "/bin/ls"
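
As a usage sketch, the same non-interactive command run against several machines from a script (the host list is made up for illustration):

for host in web01 web02 db01; do
    ssh -n -i my_priv_key \
        -o UserKnownHostsFile=/dev/null \
        -o StrictHostKeyChecking=no \
        -o PreferredAuthentications=publickey \
        "user@$host" "/bin/ls" || echo "Failed on $host"
done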

Helm Checking Keys

It is quite impressive how hard it is to check for a map key in Go templates when you just want a simple if condition in your Helm charts or other Kubernetes templates.

At least for Helm there is a nice solution. For this you have to know that Helm uses the Sprig template library, which has support for dict types, and the dict type provides a hasKey function:

{{- if hasKey .Values.mymap "mykey" }}
    # do something conditional here...
{{- end }}
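
To see the conditional in action you can render the chart locally, e.g. with a hypothetical chart directory ./mychart whose values.yaml defines a (possibly empty) mymap map, and use --set to simulate the key being present:

helm template ./mychart --set mymap.mykey=somevalue    # conditional block is rendered
helm template ./mychart                                # mykey missing, block is skipped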

You might also want to check the Helm Templates Cheat Sheet

Redis list all keys

When inspecting a Redis database you can list all keys using

KEYS <pattern>

where pattern is a Unix glob expression. Here are some examples

KEYS myprefix*      # All keys starting with 'myprefix'
KEYS *mysuffix      # All keys ending with 'mysuffix'
KEYS [a-c]*         # Everything starting with a,b or c

Note that using KEYS is not a good idea when matching a large set of keys: the command blocks the server while it scans the whole keyspace, so listing might take a lot of time.
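
If you are using redis-cli anyway, its --scan mode is a gentler way to get the same listing, since it iterates the keyspace incrementally instead of blocking (a sketch, not part of the original note):

redis-cli --scan --pattern 'myprefix*'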

Listing all hash keys

For hashes you can list their keys using

HKEYS myhash

Old helm chart repos will disappear

The old GitHub-based Helm chart repository is going to be deprecated soon, and charts might vanish depending on how this goes.

Quoting this Reddit comment:

  • May 13, 2020 At 6 months – when helm v2 goes security fix only – the stable and incubator repos will be de-listed from the Helm Hub. Chart OWNERS are encouraged to accept security fixes only
  • Nov 13, 2020 At 1 year, support for this project will formally end, and this repo will be marked obsolete

The Problem

Like with using Docker Hub images, using Helm charts either from hub.helm.sh or from https://github.com/helm/charts is not safe in terms of reproducibility. Both the Docker images the charts rely on and the chart definitions themselves might become unavailable at any time.

As with a vanishing docker image, your running deployments/pods won’t be affected at all once a chart is not available anymore.

The Solution

It is the same solution as for external Docker images, which you have to archive and only use via your own Docker registry: each chart you use has to be kept in a chart repository of your own.

So if you do not have one yet, set up a Helm chart repository like

  • chartmuseum
  • jFrog artifactory
  • Nexus …

and backup all charts you currently use.
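
A rough sketch of such archiving with ChartMuseum as the target (chart name, version and repository URL are placeholders):

helm fetch stable/mychart --version 1.2.3    # downloads mychart-1.2.3.tgz
curl --data-binary "@mychart-1.2.3.tgz" https://charts.example.com/api/charts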

Also have a look at the list of Helm best practices

Helm Best Practices

As a reminder to myself I have compiled this list of opinionated best practices (in no particular order) to follow when using Helm seriously:

General

  • Do not try to mix Helm2 and Helm3
  • Do not use Helm3 yet (as of 02/2020) as infrastructure as code tools do not support it yet
  • When using Openshift start with Helm3 only, to avoid too many workarounds

Deployment

  • Before deployment check for and fix all releases in FAILED state (see the sketch after this list)
  • After deployment check for releases in FAILED state
  • Consider using --atomic when installing/upgrading charts to get automatic rollback on failures
  • Consider using --kubeconfig or --kube-context to make 100% sure you hit the correct k8s cluster
  • Deploy your chart releases declaratively (infrastructure as code) using a tool like helmfile or terraform
  • Perform deployments using a docker image with all the infrastructure as code tooling you require. Use this tooling also when running test deployments from your laptop.
  • Wisely choose the proper Helm version and pin it in your CI/deployment tooling image.
  • Helm2: Actively check if the k8s cluster has the correct Helm version
  • Be careful with CRD-installing charts. As resource application is asynchronous by default, an installed CRD might not become visible immediately and subsequent dependent charts might fail. This happens with cert-manager for example. As a safe workaround have a layered infrastructure as code setup where in a first step all CRDs are applied and only later all dependent chart releases are applied.
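
A hedged sketch of the FAILED-state check and the atomic upgrade mentioned above (release and chart names are placeholders):

helm ls --failed                                       # Helm2: list releases in FAILED state
helm upgrade --install --atomic myrelease ./mychart    # roll back automatically if the upgrade fails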

IAM

  • When using kube2iam annotate all namespaces you create with a role whitelisting pattern/definition. Use the namespace config chart mentioned below to do so. This prevents erroneous or evil pod IAM annotations from becoming effective.

Monitoring / Testing

  • Check for releases in FAILED state
  • Always use a periodic helm diff or helmfile diff to monitor your cluster for unapplied changes (see the sketch after this list)
  • Run both upgrade and fresh install tests of your helm chart releases
  • Be careful with waiting for Helm releases to be finished before running checks.
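
For illustration, a minimal form of such a periodic drift check, assuming the helm-diff plugin is installed and a release myrelease deployed from ./mychart:

helm diff upgrade myrelease ./mychart -f values.yaml    # shows what an upgrade would change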

Troubleshooting

  • When everything is stuck remove the chart with helm delete --purge <chart release> and reinstall it. Without --purge a new installation might fail

Secrets

  • From the start use the helm secrets plugin to provide encrypted secrets using CI

Configuration

  • Always run your deployment from a clean ~/.helm setup. Otherwise it might be affected by other repo settings. Use a pre-built docker image to ensure a clean environment.
  • For reproducibility: always overwrite the chart image: value to point to an image archived in your own docker registry
  • For reproducibility: pin chart releases to exact versions
  • For reproducibility and release management: isolate the version definitions into a separate per cluster/stage YAML config (aka release descriptor)
  • Use a generic chart for namespace configuration (quotas, security settings…). For example https://github.com/zloeber/helm-namespace
  • Separate values.yaml from release declarations
  • From the beginning start using configuration environments. Either use the infrastructure as code tool's support for this (e.g. helmfile) or create a directory layout matching your environments and manually select the proper values.yaml when applying releases (see the layout sketch after this list).
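
One possible environment layout for the last point, purely as an illustration (paths and names are made up):

# environments/dev/values.yaml, environments/staging/values.yaml, environments/prod/values.yaml
helm upgrade --install myrelease ./mychart -f environments/prod/values.yaml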

Repos

  • Declare your repos using an infrastructure as code tool (see above)
  • Do not use the legacy chart repo anymore
  • For reproducibility: Host your own repository with an archive of all chart versions you ever used (chartmuseum, Nexus, jFrog artifactory will do)

When writing charts

  • Never hardcode the namespace
  • Write a good output template
  • Use helm lint to check your results
  • Use helm install --debug --dry-run ./<chart> for template rendering output by tiller
  • Use helm template ./<chart> for local template rendering output

That’s all for now. Whether this list did help you or not, or if you want to add points, consider writing a comment!

Also check out the Helm cheat sheet!


Jenkins list all pod templates

When using the Jenkins kubernetes plugin you can list all active pod templates like this

import jenkins.model.*
import org.csanchez.jenkins.plugins.kubernetes.*

if (Jenkins.instance.clouds) {
   cloud = Jenkins.instance.clouds.get(0) 
   cloud.templates.each { t ->
      println t.getName()
   }
}

How to use custom css with jekyll minima theme

When providing this blog with some custom CSS to better format code examples I had trouble applying several of the online suggestions on how to add custom CSS in a Jekyll setup with the Minima theme active.

Problem: Finding the right theme file to overrule

The issue is that for the popular Jekyll theme “Minima” there are different suggestions telling you to modify files like

  • _includes/minima/custom.scss
  • assets/minima/styles.scss
  • assets/main.scss

and so on. The problem is that different Minima versions use different paths internally, and when overriding the rules you need to choose the right path.

Solution: Find the right file path to overrule

The way to go is to check your generated HTML for the asset name. For example run

grep "stylesheet.*css" _site/index.html

In my case I got

<link rel="stylesheet" href="/blog/assets/main.css">

and with “/blog” being my base URL I knew I needed to supply the file assets/main.scss.

Do proper style inheritance

Now when writing assets/main.scss it is necessary to source the active theme. The best way is to do it by using templating for the theme name and not hard-coding it as in @import "minima". This way it might magically work once you decide to switch your theme:

---
---

@import "{{ site.theme }}";

Below this header you can add additional CSS rules as you like!

How to get docker buildkit to show run command output

When you use Docker’s new BuildKit build engine either by

DOCKER_BUILDKIT=1 docker build ...

or in most recent Docker v19 using the new buildx command

docker buildx build ...

then you won’t see any output from your RUN steps. For example, with the following Dockerfile

FROM alpine

RUN echo Hello

you won’t see a ‘Hello’ in the output (produced with docker version 18.09.7):

$ 
$ DOCKER_BUILDKIT=1 docker build . --no-cache
[+] Building 0.7s (6/6) FINISHED                                                
 => [internal] load build definition from Dockerfile                       0.0s
 => => transferring dockerfile: 37B                                        0.0s
 => [internal] load .dockerignore                                          0.0s
 => => transferring context: 2B                                            0.0s
 => [internal] load metadata for docker.io/library/alpine:latest           0.0s
 => CACHED [1/2] FROM docker.io/library/alpine                             0.0s
 => [2/2] RUN echo Hello                                                   0.6s
 => exporting to image                                                     0.0s
 => => exporting layers                                                    0.0s
 => => writing image sha256:ee9833c6e7bfa7e64fa74b1aa5da5a1818dc881929651  0.0s
$

So while it tells about the RUN step, it doesn’t give you the output.

Now with BuildKit comes a new option for docker build named --progress that lets you switch between three output modes:

  • auto (default)
  • plain
  • tty

Interestingly there are sources that say ‘tty’ is supposed to be the legacy output. But at least for me it gives no different output than ‘auto’. I guess because ‘auto’ selects ‘tty’, and ‘tty’ is what it is…

So this leaves ‘plain’, which produces a very interesting block output that actually lists the RUN command result:

$ DOCKER_BUILDKIT=1 docker build --progress=plain .
[...]

#5 [2/2] RUN echo Hello
#5       digest: sha256:a3caaceea4f05ba90a9dea76e435897fe6454097eb90233e58530b6e1e7f7a53
#5         name: "[2/2] RUN echo Hello"
#5      started: 2020-03-13 18:52:22.20543247 +0000 UTC
#5 0.496 Hello
#5    completed: 2020-03-13 18:52:22.750641665 +0000 UTC
#5     duration: 545.209195ms

[...]

One can argue about the usability of this, but still, this is the way to go to get command output when using BuildKit.

Jekyll collections without frontmatter

After researching this for some hours I want to document it:

Collections without frontmatter are not possible

The findings:

  • Worked in very old Jekyll versions (2.x)
  • Not even supported by the jekyll-optional-front-matter plugin
  • Using plugin jekyll-title-from-headings also doesn’t help

So if you have to work with input files without front matter you have to treat them as posts or pages to avoid Jekyll just copying the Markdown to the target directory.

Workaround

If you have a use case where you relied on the collection list being available under site.<collection> you could iterate over all posts/pages of a common category instead.

If you keep all the documents in a subdir, setting the category via front matter defaults in _config.yml can help:

defaults:
   - scope:
       path: subdir
     values:
       category: name-of-collection

Http requests from jenkins pipeline

This post provides a summary of the possibilities to perform HTTP requests from a Jenkins pipeline.

Overview

There are the following possibilities

  • Just doing a curl in a shell
  • Using Groovy URL object
  • Using plain old Java networking methods
  • Using a Jenkins plugin like http_request

Also, security-wise, to properly handle the Jenkins sandboxing/script approval, the call might want to be done

  • inline (with code inside the pipeline)
  • in a global shared library

Sandboxing and Script Approval

In a properly configured Jenkins setup you usually have explicit script approval active and to use non-default libraries you need to extend the signature whitelist. Alternatively you can provide the required HTTP request functionality via plugin or global shared library, both of which are not subject to the script approval/signature whitelisting.

Variant          Impact when used inline      Impact when used in plugin/global shared library
curl             none                         none
Groovy URL       signature approval needed    none
Java networking  signature approval needed    none
Plugin           none                         n.a.

While using curl or a plugin has no security impact on pipeline development, they both have to be provided in terms of setup: curl needs to be installed on the Jenkins agents and the plugin has to be installed and maintained in the Jenkins setup, both of which are usually ITOps/DevOps tasks.

In terms of actively and continuously developing required functionality I believe a global shared library is the way to go. This is both because it can be easily configured and development can happen by testing pipelines against new feature branches of the library.

Example Snippets

Below you find example snippets for the HTTP request mechanisms mentioned above.

curl

 sh 'curl https://google.com'

Groovy

From https://stackoverflow.com/questions/34682099/how-to-call-rest-from-jenkins-workflow

 def get = new URL("https://httpbin.org/get").openConnection();
 def getRC = get.getResponseCode();
 println(getRC);
 if(getRC.equals(200)) {
    println(get.getInputStream().getText()); 
 }

Using Java base libraries

Copied from

import java.io.BufferedReader
import java.io.InputStreamReader
import java.io.OutputStreamWriter
import java.net.URL
import java.net.URLConnection

def sendPostRequest(urlString, paramString) {
    def url = new URL(urlString)
    def conn = url.openConnection()
    conn.setDoOutput(true)
    def writer = new OutputStreamWriter(conn.getOutputStream())

    writer.write(paramString)
    writer.flush()
    String line
    def reader = new BufferedReader(new InputStreamReader(conn.getInputStream()))
    while ((line = reader.readLine()) != null) {
      println line
    }
    writer.close()
    reader.close()
}

sendPostRequest("https://google.com", "")

http_request plugin

 def response = httpRequest 'http://localhost:8080/jenkins/api/json?pretty=true'
 println("Status: "+response.status)
 println("Content: "+response.content)

Find broken Helm3 releases

With Helm3 find releases in unexpected state:

 helm ls -A -o json | jq  -r '.[] | select(.status != "deployed") | .name'
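
For example, to fail a CI step as soon as any release is not in "deployed" state (a sketch, adjust to your pipeline):

broken=$(helm ls -A -o json | jq -r '.[] | select(.status != "deployed") | .name')
[ -z "$broken" ] || { echo "Broken releases: $broken"; exit 1; }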

ffmpeg video transcoding from Nautilus

I want to share this little video conversion script for the GNOME file manager Nautilus. As Nautilus supports custom scripts being executed for selected files, I wrote a script to allow video transcoding directly from Nautilus.

How it looks

Once installed you get a new menu item like this

Nautilus context menu

When you start the script (and have zenity installed) it will prompt you for a transcoding profile:

Nautilus transcoding profile

The list of profiles is hard-coded in the script, but should be easy to extend.

How to install it

Place the following script in ~/.local/share/nautilus/scripts/ffmpeg-convert.sh and run chmod a+x on it.

#!/bin/bash

profiles="\
mp4_h264_400k_aac_128k
mp4_h264_600k_aac_128k
mp4_h264_1000k_aac_128k
mp3_128k"

# If zenity is installed prompt for parameters
if command -v zenity >/dev/null; then
	profile=$(zenity --width=480 --title="ffmpeg convert" --text="Transcoding Profile" --entry --entry-text=$profiles)
	if [ $? -ne 0 ]; then
		exit	# On Zenity 'Cancel'
	fi
fi

# If zenity failed/is missing use default (first profile)
if [ "$profile" = "" ]; then
	profile=$(echo "$profiles" | head -1)
fi

# Build ffmpeg options from the profile name
case "$profile" in
	mp3_128k)
		options="-vn -acodec mp3 -b:a 128k"
		extension=mp3
		;;
	*)
		profile="${profile//_/ }"
		set -- ${profile}
		options="-vcodec $2 -b:v $3 -acodec $4 -b:a $5"
		extension=$1
		;;
esac

# Transcode each selected file and notify about the result
while read file; do
	output="${file/\.*/.}$extension"
	echo ffmpeg -y -i "$file" $options "$output"
	if ffmpeg -y -i "$file" $options "$output"; then
		notify-send "Ready: '$(basename "$output")'"
	else
		notify-send "Converting '$file' failed!"
	fi
done < <(echo "$NAUTILUS_SCRIPT_SELECTED_FILE_PATHS")

Jenkins search for unmasked passwords

When you run a larger multi-tenant Jenkins instance you might wonder if everyone properly hides secrets from logs. The script below needs to be run as admin and will uncover all unmasked passwords in any pipeline job build:

for (job in Jenkins.instance.getAllItems()) {
    try {
        if (job.hasProperty("builds")) {
            if (!job.builds.isEmpty()) {
                for (build in job.builds) {
                    if (build.hasProperty("log")) {
                        if (build.log =~ /password=[^\*]/) {
                            println "Found unmasked password in: ${job.fullName} #${build.id}"
                        }
                    } else {
                        println "No log available for ${job.fullName} #${build.id}"
                    }
                }
            } else {
                println "Builds empty for ${job.fullName}"
            }
        } else {
            println "Builds not available for ${job.fullName}"
        }
    } catch (Exception e) {
        println "[ ERROR ] Skipping due to ${e}. Current job: ${job.fullName}"
    }
}

Note that the script can only identify unmasked passwords in pipeline jobs that have at least one run.

Adding custom ca certificates in openshift

This post documents quite some research through the honestly quite sad Openshift documentation. The content below roughly corresponds to Openshift releases 4.4 to 4.7.

## How to add custom CAs?

When your Openshift cluster accesses external services that use CAs not contained in the CA bundle of the underlying Redhat OS you will see errors like this

x509: certificate signed by unknown authority

This error will appear when you access the external service using Openshift routes (e.g. when doing an OAuth flow for login). The error message is from haproxy, which serves the routes and blocks the unknown CA.

The solution is to add your custom CAs. But where?

## Solution 1: Add it to the Redhat OS CA bundle

This is done by providing a `MachineConfig` resource. Here is the [example](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.3/html-single/authentication/index) from the Redhat documentation:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 50-examplecorp-ca-cert
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVORENDQXh5Z0F3SUJBZ0lKQU51bkkwRDY2MmNuTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUdsTVFzd0NRWUQKV1FRR0V3SlZVekVYTUJVR0ExVUVDQXdPVG05eWRHZ2dRMkZ5YjJ4cGJtRXhFREFPQmdOVkJBY01CMUpoYkdWcApBMmd4RmpBVUJnTlZCQW9NRFZKbFpDQklZWFFzSUVsdVl5NHhFekFSQmdOVkJBc01DbEpsWkNCSVlYUWdTVlF4Ckh6QVpCZ05WQkFNTUVsSmxaQ0JJWVhRZ1NWUWdVbTl2ZENCRFFURWhNQjhHQ1NxR1NJYjNEUUVKQVJZU2FXNW0KWGpDQnBURUxNQWtHQTFVRUJoTUNWVk14RnpBVkJnTlZCQWdNRGs1dmNuUm9JRU5oY205c2FXNWhNUkF3RGdZRApXUVFIREFkU1lXeGxhV2RvTVJZd0ZBWURWUVFLREExU1pXUWdTR0YwTENCSmJtTXVNUk13RVFZRFZRUUxEQXBTCkFXUWdTR0YwSUVsVU1Sc3dHUVlEVlFRRERCSlNaV1FnU0dGMElFbFVJRkp2YjNRZ1EwRXhJVEFmQmdrcWhraUcKMHcwQkNRRVdFbWx1Wm05elpXTkFjbVZrYUdGMExtTnZiVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUApCRENDQVFvQ2dnRUJBTFF0OU9KUWg2R0M1TFQxZzgwcU5oMHU1MEJRNHNaL3laOGFFVHh0KzVsblBWWDZNSEt6CmQvaTdsRHFUZlRjZkxMMm55VUJkMmZRRGsxQjBmeHJza2hHSUlaM2lmUDFQczRsdFRrdjhoUlNvYjNWdE5xU28KSHhrS2Z2RDJQS2pUUHhEUFdZeXJ1eTlpckxaaW9NZmZpM2kvZ0N1dDBaV3RBeU8zTVZINXFXRi9lbkt3Z1BFUwpZOXBvK1RkQ3ZSQi9SVU9iQmFNNzYxRWNyTFNNMUdxSE51ZVNmcW5obzNBakxRNmRCblBXbG82MzhabTFWZWJLCkNFTHloa0xXTVNGa0t3RG1uZTBqUTAyWTRnMDc1dkNLdkNzQ0F3RUFBYU5qTUdFd0hRWURWUjBPQkJZRUZIN1IKNXlDK1VlaElJUGV1TDhacXczUHpiZ2NaTUI4R0ExVWRJd1FZTUJhQUZIN1I0eUMrVWVoSUlQZXVMOFpxdzNQegpjZ2NaTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RGdZRFZSMFBBUUgvQkFRREFnR0dNQTBHQ1NxR1NJYjNEUUVCCkR3VUFBNElCQVFCRE52RDJWbTlzQTVBOUFsT0pSOCtlbjVYejloWGN4SkI1cGh4Y1pROGpGb0cwNFZzaHZkMGUKTUVuVXJNY2ZGZ0laNG5qTUtUUUNNNFpGVVBBaWV5THg0ZjUySHVEb3BwM2U1SnlJTWZXK0tGY05JcEt3Q3NhawpwU29LdElVT3NVSks3cUJWWnhjckl5ZVFWMnFjWU9lWmh0UzV3QnFJd09BaEZ3bENFVDdaZTU4UUhtUzQ4c2xqCjVlVGtSaml2QWxFeHJGektjbGpDNGF4S1Fsbk92VkF6eitHbTMyVTB4UEJGNEJ5ZVBWeENKVUh3MVRzeVRtZWwKU3hORXA3eUhvWGN3bitmWG5hK3Q1SldoMWd4VVp0eTMKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
        filesystem: root
        mode: 0644
        path: /etc/pki/ca-trust/source/anchors/examplecorp-ca.crt

Using this you can place multiple files at the OS level. The serious drawback: you have to **reboot all nodes!!!** to make it active (as the CA bundle is compiled on bootup).

## Solution 2: Using the global proxy configuration at runtime

This works well if you have the global proxy enabled anyway.

apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://<user>:<password>@<proxy-host>:<port>
  httpsProxy: http://<user>:<password>@<proxy-host>:<port>
  noProxy: example.com
  readinessEndpoints:
  - http://www.google.com
  - https://www.google.com
  trustedCA:
    name: user-ca-bundle          <---------------

The proxy object has a key to reference a user provided CA bundle config map. Although this only helps you with the proxy CA certificate, I guess...

The [documentation page](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html/networking/configuring-a-custom-pki#installation-configure-proxy_configuring-a-custom-pki) for this is actually really bad, giving no indication whether the 3 sections are steps to be done together or are configuration alternatives. Given the name of this page "CONFIGURING A CUSTOM PKI" one would expect an actually useful overview, but hey...

## Solution 3: Using the global proxy configuration at install time

This is different from the above, as you can use the "additionalTrustBundle" field in your `install-config.yaml` to pass extra CA certificates:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<user>:<password>@<proxy-host>:<port>
  httpsProxy: http://<user>:<password>@<proxy-host>:<port>
  noProxy: example.com
additionalTrustBundle: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----

**Note**: this does not work without the proxy enabled! Just configuring additionalTrustBundle creates a config map `user-ca-bundle` which will not be used without the proxy. Why the "additionalTrustBundle" is just a trust bundle for the proxy and not a global one is (at least for me) not logical, but hey...

## Solution 4: OAuth specific trust bundle

If your use case is a custom CA used by your identity provider you are in luck and can provide an extra config map in `openshift-config` that provides your certificates. Please note that this config map uses a different key `ca.crt` instead of `ca-bundle.crt` as all the other config maps do. Don't worry, you can still pass a bundle there! Here is the example snippet from the [documentation](https://docs.openshift.com/container-platform/4.6/authentication/identity_providers/configuring-oidc-identity-provider.html#identity-provider-oidc-CR_configuring-oidc-identity-provider):

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: oidcidp
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: ...
      clientSecret:
        name: idp-secret
      ca:
        name: ca-config-map          <----------------
      [...]

## Conclusion

To be honest there is no concept here, just patchwork: different places with similar solutions and no comprehensive documentation. You might want to book some enterprise consultants from IBM here.

Please drop a note in the comments if this helps or if my research is missing important parts!

HTH

Easily fix async video with ffmpeg

## 1. Correcting Audio that is too slow/fast

This can be done using the `-async` parameter of ffmpeg which according to the documentation *"Stretches/squeezes" the audio stream to match the timestamps*. The parameter takes a numeric value for the samples per second to enforce.

ffmpeg -async 25 -i input.mpg -r 25

Try slowly increasing the -async value until audio and video match.

## 2. Auto-Correcting Time-Shift

### 2.1 Audio is ahead

When audio is ahead of video: as a special case the `-async` switch auto-corrects the start of the audio stream when passed as `-async 1`. So try running

ffmpeg -async 1 -i input.mpg

### 2.2 Audio lags behind

Instead of using `-async` you need to use `-vsync` to drop/duplicate frames in the video stream. There are two methods in the manual page, "-vsync 1" and "-vsync 2", and a method auto-detection with "-vsync -1". Using "-map" it is also possible to specify the stream to sync against.

ffmpeg -vsync 1 -i input.mpg
ffmpeg -vsync 2 -i input.mpg

Interestingly Google shows people using `-async` and `-vsync` together. So it might be worth experimenting a bit to achieve the intended result :-)

## 3. Manually Correcting Time-Shift

If you have a constantly shifted sound/video track that the previous fix doesn't work with, but you know the time shift that needs to be corrected, then you can easily fix it with one of the following two commands:

### 3.1 Audio is ahead

Example to shift by 3 seconds:

ffmpeg -i input.mp4 -itsoffset 00:00:03.0 -i input.mp4 -vcodec copy -acodec copy -map 0:1 -map 1:0 output_shift3s.mp4

Note how you specify your input file twice, with the first one followed by a time offset. Later in the command there are two `-map` parameters which tell ffmpeg to use the time-shifted video stream from the first `-i input.mp4` and the audio stream from the second one.

I also added `-vcodec copy -acodec copy` to avoid reencoding the video and losing quality. These parameters have to be added after the second input file and before the mapping options, otherwise one runs into mapping errors.

### 3.2 Audio lags behind

Again an example to shift by 3 seconds:

ffmpeg -i input.mp4 -itsoffset 00:00:03.0 -i input.mp4 -vcodec copy -acodec copy -map 1:0 -map 0:1 output_shift3s.mp4

Note how the command is nearly identical to the previous command, with the exception of the `-map` parameters being switched: from the time-shifted first `-i input.mp4` we now take the audio instead of the video and combine it with the normal video.

Convert json to yaml in linux bash

## Is it possible with bash?

The short and reasonable answer is: no! Bash won't do it in a reliable way for you.

## Basic Linux Tools that convert JSON to YAML

The goal here should be to use standard tools that in the best case you do not need to install. The basic scripting languages and their standard modules are good candidates. Also, `yq` (the YAML pendant to [jq](/cheat-sheet/jq)), which is becoming more popular, might soon be available in Linux distros.

### Ruby

ruby -ryaml -rjson -e 'puts YAML.dump(JSON.parse(STDIN.read))'

### jq

0)[] | [yamlify2] | " - \(.[0])", " \(.[1:][])" ) // . ;

And convert using

jq -r yamlify2 input.json
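
In the same spirit, a sketch using Python (assuming Python 3 with the PyYAML module installed, which is not always part of the base system):

python3 -c 'import sys, json, yaml; print(yaml.safe_dump(json.load(sys.stdin), default_flow_style=False))' < input.json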

Store multi-line secrets in Azure DevOps Pipeline Libraries

Azure DevOps wants you to provide secrets to pipelines using a so-called `pipeline library`. You can store single-line strings as `secrets` in the pipeline library. You cannot, though, store multi-line strings as `secrets` without messing up the line breaks.

## Storing multi-line secrets as "Secret Files"

If you need a multi-line secret (e.g. an SSH private key or a kubectl context) you need to provide this secret as a `secret file`. When you open the library click the 2nd tab 'Secret Files' and upload your secret.

When accessing secrets and secret files via variables in the pipeline there is no difference in using the secret variable!