<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[FG]]></title><description><![CDATA[spare brain website]]></description><link>https://detiers.com/</link><image><url>https://detiers.com/favicon.png</url><title>FG</title><link>https://detiers.com/</link></image><generator>Ghost 5.33</generator><lastBuildDate>Wed, 06 May 2026 11:32:05 GMT</lastBuildDate><atom:link href="https://detiers.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Istio Egress Filtering Deep Dive]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h2 id="context">Context</h2>
<!--kg-card-end: markdown--><p>In our Kubernetes cluster, we use Istio as a service mesh to control connectivity between pods. In practice, this means that every pod in our cluster has an Envoy proxy attached to it, capturing traffic in both directions: ingress and egress.</p><p>By default, every pod can</p>]]></description><link>https://detiers.com/egress-filtering/</link><guid isPermaLink="false">658d3f982c5fe000017516a7</guid><dc:creator><![CDATA[FG]]></dc:creator><pubDate>Tue, 09 Apr 2024 11:40:24 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h2 id="context">Context</h2>
<!--kg-card-end: markdown--><p>In our Kubernetes cluster, we use Istio as a service mesh to control connectivity between pods. In practice, this means that every pod in our cluster has an Envoy proxy attached to it, capturing traffic in both directions: ingress and egress.</p><p>By default, every pod can reach any other pod within the mesh; however, we want to authorize traffic only to legitimate applications and forbid everything else.</p><p>It may sound simple: after all, an Istio resource exists for exactly that. But we faced some caveats during the deployment. This article dives deep into the egress filtering implementation and demonstrates what exactly is modified in the Envoy configuration when you manipulate Istio resources.</p><!--kg-card-begin: markdown--><h2 id="the-istio-control-plane">The Istio Control Plane</h2>
<!--kg-card-end: markdown--><p>The brain of our service mesh is <code>istiod</code>, aka the Istio control plane.</p><p>This piece of software is responsible for:</p><ul><li>gathering the configuration described in the Istio resources (VirtualService, ServiceEntry, DestinationRule...) as well as in Kubernetes itself (Services, Endpoints&#x2026;)</li><li>building the corresponding Envoy configuration</li><li>pushing that configuration to every Envoy proxy in the mesh.</li></ul><blockquote>On this page I usually use the term &#x201C;Istio proxy&#x201D; instead of &#x201C;Envoy proxy&#x201D;, but they are strictly equivalent: Envoy is the actual proxy software used by Istio.</blockquote><h2 id="envoy-terminology">Envoy terminology</h2><p>The following is a simplified view of Envoy.</p><figure class="kg-card kg-image-card"><img src="https://lh7-us.googleusercontent.com/mB32wbbqvAZCGQiuzpONAHS2XMJgk3d9uVcpbZBUjb4iUKQl4G0v5q7KaRFDRcTJ-hdygvOaEu9lcReUUFQ_-47swlt2do-aFRn1gWQfsndyDagW5zxOq7OU1Qe4rxD2D6_laIV1rR_W-RlJdCN4I-S1=nw" class="kg-image" alt loading="lazy" width="960" height="439"></figure><p>Excerpt from the official Envoy documentation:</p><p><strong>Listener</strong>: A listener is a named network location (e.g., port, unix domain socket, etc.) that can be connected to by downstream clients. Envoy exposes one or more listeners that downstream hosts connect to.</p><p><strong>Cluster</strong>: A cluster is a group of logically similar upstream hosts that Envoy connects to. Envoy discovers the members of a cluster via <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/service_discovery#arch-overview-service-discovery">service discovery</a>.<br><br>I used these two notions to draw the diagrams below, showing which logical block traffic enters depending on its direction (ingress or egress).</p><!--kg-card-begin: markdown--><h2 id="simple-istio-mesh-config">Simple Istio mesh config</h2>
<!--kg-card-end: markdown--><p>Before jumping into the details of egress filtering, let&apos;s have an overview of the Envoy configuration when your pods are deployed with an Istio proxy and no specific configuration.</p><p>Here, we have an <code>application-0</code> which exposes a service on HTTP port <code>8080</code> and has 3 dependencies: <code>application-1</code> and <code>application-2</code>, located inside the mesh, and <code>bigtable</code>, which is located outside of the mesh.</p><p>This results in the following proxy config:</p><ul><li>No specific inbound listener</li><li>More than 2 outbound clusters: every service in the mesh is added to the outbound configuration</li></ul><p>This means that every topology change pushes a configuration update to every proxy in the mesh, even if your application is not concerned. Envoy then consumes more CPU and memory to process this complete mesh topology.</p><figure class="kg-card kg-image-card kg-width-full"><img src="https://detiers.com/content/images/2024/01/istio-egress-Without-Sidecar-1.jpg" class="kg-image" alt loading="lazy" width="2000" height="1301" srcset="https://detiers.com/content/images/size/w600/2024/01/istio-egress-Without-Sidecar-1.jpg 600w, https://detiers.com/content/images/size/w1000/2024/01/istio-egress-Without-Sidecar-1.jpg 1000w, https://detiers.com/content/images/size/w1600/2024/01/istio-egress-Without-Sidecar-1.jpg 1600w, https://detiers.com/content/images/size/w2400/2024/01/istio-egress-Without-Sidecar-1.jpg 2400w"></figure><!--kg-card-begin: markdown--><h2 id="adding-mesh-internal-dependencies">Adding mesh internal dependencies</h2>
<!--kg-card-end: markdown--><p>To avoid this overhead and, more importantly for the subject of this article, to qualify our workload dependencies, we usually leverage the <a href="https://istio.io/latest/docs/reference/config/networking/sidecar/">Sidecar resource</a>, which allows fine-tuning the ingress and egress traffic of a workload.</p><p>Let&apos;s start with only our mesh-internal dependencies: <code>application-1</code> and <code>application-2</code>.</p><p>To do that, we define in the Sidecar resource:</p><ul><li>1 <code>ingress</code> entry</li><li>2 <code>egress</code> hosts</li></ul><!--kg-card-begin: markdown--><pre><code class="language-yaml">apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: &quot;release-name&quot;
spec:
  workloadSelector:
    labels:
      app: &quot;release-name&quot;
  outboundTrafficPolicy:
    mode: &quot;ALLOW_ANY&quot;
  ingress:
  - port:
      number:  8080
      protocol: TCP
      name: http
    defaultEndpoint: &quot;127.0.0.1:8080&quot;
  egress:
  - hosts:
    - namespace/application-1
    - namespace/application-2
</code></pre>
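<p>To check what this actually produced, you can dump the Envoy configuration generated for the pod with <code>istioctl</code>. This is only a sketch: <code>application-0-xxxxx</code> and <code>my-namespace</code> below are placeholders for your own pod and namespace.</p><pre><code class="language-bash"># Inspect the listeners and outbound clusters Envoy built for the pod
# (application-0-xxxxx and my-namespace are placeholders)
istioctl proxy-config listeners application-0-xxxxx -n my-namespace
istioctl proxy-config clusters application-0-xxxxx -n my-namespace
</code></pre>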
<!--kg-card-end: markdown--><p>This results in the following proxy config:</p><ul><li>1 inbound listener: this is mainly used to get correct metrics. <strong>However, at this point, if you get the port number wrong, you will break your inbound traffic.</strong></li><li>2 outbound clusters: all other services in the mesh are ignored in terms of outbound configuration, but they&#x2019;re still reachable.</li></ul><p>The first benefit is a lower footprint: the configuration is now reduced to only a couple of dependencies. We&apos;re almost ready to leverage the filtering, but right now we&apos;re still using the <code>ALLOW_ANY</code> parameter, so nothing is really filtered out yet. Still, it may have an impact anyway.</p><h3 id="harmless-really">Harmless, really?</h3><p>Indeed, <code>application-3</code> is not a dependency declared in the Sidecar resource, so the internal <code>outboundCluster</code> for <code>application-3</code> doesn&apos;t exist anymore.</p><p>If <code>application-0</code> must reach <code>application-3</code> (a forgotten dependency, for example), the traffic is still allowed, as said before, thanks to <code>outboundTrafficPolicy</code> set to <code>ALLOW_ANY</code>.<br> <br>However, it may break if <code>application-3</code> has a specific cluster configuration (VirtualService, DestinationRule...). All traffic forwarded through the <code>passThroughCluster</code> is handled without taking the destination&apos;s VirtualService and DestinationRule configuration into consideration. So take extra precautions to really identify your dependencies before adding the Sidecar resource.</p><p>Regarding external services like <code>bigtable</code>, nothing changes: all forwarded traffic still goes through the <code>passThroughCluster</code>. We must deal with this before actually performing the filtering.</p><figure class="kg-card kg-image-card kg-width-full"><img src="https://detiers.com/content/images/2024/01/istio-egress-With-Sidecar-2.jpg" class="kg-image" alt loading="lazy" width="2000" height="1426" srcset="https://detiers.com/content/images/size/w600/2024/01/istio-egress-With-Sidecar-2.jpg 600w, https://detiers.com/content/images/size/w1000/2024/01/istio-egress-With-Sidecar-2.jpg 1000w, https://detiers.com/content/images/size/w1600/2024/01/istio-egress-With-Sidecar-2.jpg 1600w, https://detiers.com/content/images/size/w2400/2024/01/istio-egress-With-Sidecar-2.jpg 2400w"></figure><!--kg-card-begin: markdown--><h2 id="adding-mesh-external-dependencies">Adding mesh external dependencies</h2>
<!--kg-card-end: markdown--><p>Now that we&apos;ve defined all the internal dependencies, let&apos;s add the external ones. Open your Sidecar resource in your favorite editor and add <code>bigtable</code>.</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: &quot;release-name&quot;
spec:
  workloadSelector:
    labels:
      app: &quot;release-name&quot;
  outboundTrafficPolicy:
    mode: &quot;ALLOW_ANY&quot;
  ingress:
  - port:
      number:  8080
      protocol: TCP
      name: http
    defaultEndpoint: &quot;127.0.0.1:8080&quot;
  egress:
  - hosts:
    - namespace/application-1
    - namespace/application-2
    - ./bigtable.googleapis.com
</code></pre>
<!--kg-card-end: markdown--><p>Then you create a <a href="https://istio.io/latest/docs/reference/config/networking/service-entry/">ServiceEntry resource</a> which specifies:</p><!--kg-card-begin: markdown--><pre><code class="language-yaml">apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: &quot;release-name&quot;
  namespace: istio-system
spec:
  exportTo:
  - &apos;.&apos;
  hosts:
  - bigtable.googleapis.com
  location: MESH_EXTERNAL
  ports:
  - name: https
    number: 443
    protocol: HTTPS
  resolution: DNS
</code></pre>
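<p>Assuming <code>istioctl</code> is available, you can confirm the new cluster appears once the ServiceEntry is applied. Again a sketch: the pod name and namespace are placeholders for your own.</p><pre><code class="language-bash"># Show only the cluster matching the external host
# (application-0-xxxxx and my-namespace are placeholders)
istioctl proxy-config clusters application-0-xxxxx -n my-namespace --fqdn bigtable.googleapis.com
</code></pre>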
<!--kg-card-end: markdown--><p>This results in the following proxy config:</p><p><strong>3</strong> outbound clusters. The third one (corresponding to <code>bigtable.googleapis.com</code>) finally shows up as soon as you create the ServiceEntry resource.</p><p>Why not before? Because Istio relies on its service registry to build its cluster configuration. All Kubernetes services are populated in the Istio registry by default, but <code>bigtable</code> is not part of the mesh: that&apos;s why we create a ServiceEntry resource, which adds an entry to the Istio service registry.</p><figure class="kg-card kg-image-card kg-width-full"><img src="https://detiers.com/content/images/2024/01/istio-egress-With-Sidecar-and-ext-dep-4.jpg" class="kg-image" alt loading="lazy" width="2000" height="1531" srcset="https://detiers.com/content/images/size/w600/2024/01/istio-egress-With-Sidecar-and-ext-dep-4.jpg 600w, https://detiers.com/content/images/size/w1000/2024/01/istio-egress-With-Sidecar-and-ext-dep-4.jpg 1000w, https://detiers.com/content/images/size/w1600/2024/01/istio-egress-With-Sidecar-and-ext-dep-4.jpg 1600w, https://detiers.com/content/images/size/w2400/2024/01/istio-egress-With-Sidecar-and-ext-dep-4.jpg 2400w"></figure><p>Now you can activate the <code>REGISTRY_ONLY</code> option, which allows traffic only to the defined logical clusters and forbids anything that used to go through the <code>passThroughCluster</code>.</p>]]></content:encoded></item><item><title><![CDATA[Using flock to wait for a lock file to be released]]></title><description><![CDATA[<p>I love flock.</p><p>I frequently use it to wait up to 10 seconds for apt to finish its unattended upgrades and release the lock, like this:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">(
flock -w 10 9 || exit 1
aptly repo remove $repo $files
) 9&gt;/var/lock/aptlock
</code></pre>
<!--kg-card-end: markdown-->]]></description><link>https://detiers.com/waiting-for-apt-to-finish/</link><guid isPermaLink="false">5e41501251573500012797b1</guid><dc:creator><![CDATA[FG]]></dc:creator><pubDate>Wed, 27 Dec 2023 14:20:03 GMT</pubDate><content:encoded><![CDATA[<p>I love flock.</p><p>I frequently use it to wait up to 10 seconds for apt to finish its unattended upgrades and release the lock, like this:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">(
flock -w 10 9 || exit 1
aptly repo remove $repo $files
) 9&gt;/var/lock/aptlock
</code></pre>
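<p>Here is a self-contained sketch of the same pattern, using a temporary lock file instead of <code>/var/lock/aptlock</code>: one subshell takes the lock and holds it, while a second one gives up after the <code>-w</code> timeout.</p><pre><code class="language-bash">LOCKFILE=$(mktemp)

# Hold the lock for 2 seconds in the background
(
  flock 9
  sleep 2
) 9&gt;"$LOCKFILE" &amp;

sleep 0.2  # let the background job acquire the lock first

# -w 1: wait at most 1 second for the lock, then give up
RESULT=$( (flock -w 1 9 &amp;&amp; echo acquired || echo timeout) 9&gt;"$LOCKFILE" )
echo "$RESULT"  # prints "timeout" while the lock is still held

wait
rm -f "$LOCKFILE"
</code></pre>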
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Use rsync over SSH]]></title><description><![CDATA[<p>I always forget this one...</p><!--kg-card-begin: markdown--><pre><code class="language-bash">rsync -avz -e &apos;ssh -p 2223&apos; &lt;origin&gt; &lt;destination&gt;
</code></pre>
<!--kg-card-end: markdown-->]]></description><link>https://detiers.com/use-rsync-over-ssh/</link><guid isPermaLink="false">658c2e1c2c5fe0000175164b</guid><dc:creator><![CDATA[FG]]></dc:creator><pubDate>Wed, 27 Dec 2023 14:01:40 GMT</pubDate><content:encoded><![CDATA[<p>I always forget this one...</p><!--kg-card-begin: markdown--><pre><code class="language-bash">rsync -avz -e &apos;ssh -p 2223&apos; &lt;origin&gt; &lt;destination&gt;
</code></pre>
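<p>Since <code>-e</code> only selects the transport, the same flags work for a plain local copy, which makes it easy to try. A sketch with throwaway directories (add <code>-n</code> first for a dry run that only lists what would be transferred):</p><pre><code class="language-bash">SRC=$(mktemp -d); DST=$(mktemp -d)
echo hello &gt; "$SRC/file.txt"

rsync -avzn "$SRC/" "$DST/"  # dry run: nothing is copied yet
rsync -avz "$SRC/" "$DST/"   # real transfer; over SSH, add: -e 'ssh -p 2223'
cat "$DST/file.txt"          # prints "hello"
</code></pre>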
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Cleanup APTLY repo]]></title><description><![CDATA[<p>After joining my new team, I saw a huge number of packages in the development APT repo. We use <a href="https://www.aptly.info">APTLY</a> to manage our repo, and unfortunately it doesn&apos;t provide a way to expire packages or keep only a given number of package releases. So, I wrote this</p>]]></description><link>https://detiers.com/cleanup-aptly-repo/</link><guid isPermaLink="false">5e0b323c603b49000176989e</guid><category><![CDATA[Linux]]></category><category><![CDATA[APT]]></category><dc:creator><![CDATA[FG]]></dc:creator><pubDate>Sat, 11 May 2019 08:41:00 GMT</pubDate><content:encoded><![CDATA[<p>After joining my new team, I saw a huge number of packages in the development APT repo. We use <a href="https://www.aptly.info">APTLY</a> to manage our repo, and unfortunately it doesn&apos;t provide a way to expire packages or keep only a given number of package releases. So, I wrote this Python script to clean up the repo.</p><p>By default, it keeps 20 versions, but this can be overridden. It handles aptly package queries, so it can be run against a single package or a set of packages.</p><h2 id="simple-package-query-example">Simple package query example</h2><!--kg-card-begin: markdown--><pre><code class="language-bash"># clean-repo.py --repo buster-dev --package-query vault-server --keep 2 --dry-run
Run in dry mode, without actually deleting the packages.
Remove &quot;vault-server&quot; from buster-dev and keep the last 2 packages.

This package(s) would be kept:
vault-server_1:0.11.4~20190424~buster.build0_amd64
vault-server_1:0.11.4~20190425~buster.build0_amd64
</code></pre>
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><pre><code class="language-bash"># clean-repo.py --repo buster-dev --package-query vault-server --keep 1 --dry-run
Run in dry mode, without actually deleting the packages.
Remove &quot;vault-server&quot; from buster-dev and keep the last 1 packages.

This package(s) would be kept:
vault-server_1:0.11.4~20190425~buster.build0_amd64

This package(s) would be deleted:
vault-server_1:0.11.4~20190424~buster.build0_amd64
</code></pre>
<!--kg-card-end: markdown--><h2 id="multiple-package-query-example">Multiple package query example </h2><!--kg-card-begin: markdown--><pre><code class="language-bash"># clean-repo.py --repo buster-dev --package-query &apos;Name (% vault-*)&apos; --keep 1 --dry-run
Run in dry mode, without actually deleting the packages.
Remove &quot;Name (% vault-*)&quot; from buster-dev and keep the last 1 packages.

This package(s) would be kept:
vault-common_1:0.11.4~20190425~buster.build0_amd64
vault-server_1:0.11.4~20190425~buster.build0_amd64

This package(s) would be deleted:
vault-common_1:0.11.4~20190424~buster.build0_amd64
vault-server_1:0.11.4~20190424~buster.build0_amd64
</code></pre>
<!--kg-card-end: markdown--><h2 id="full-script">Full script</h2><!--kg-card-begin: html--><script src="https://gist.github.com/frgaudet/1e19df820ebc9c75c6463e9ef27f4d12.js"></script><!--kg-card-end: html--><p>Use your favorite scheduler (rundeck, cron...) to run this script.</p>]]></content:encoded></item><item><title><![CDATA[2FA SSH authentication for your server]]></title><description><![CDATA[<h2 id="install-google-authenticator-package">Install google-authenticator package</h2><!--kg-card-begin: markdown--><p><code>sudo apt-get install libpam-google-authenticator</code></p>
<!--kg-card-end: markdown--><h3 id="configure-google-authenticator">Configure google-authenticator</h3><p>Run google-authenticator as the user you want to authenticate with 2FA and answer a few questions.</p><p>Shall this tool update your configuration file? Answer yes to this first question.</p><p>For maximum security:</p><ul><li>Restrict the use of a token by</li></ul>]]></description><link>https://detiers.com/2fa_ssh/</link><guid isPermaLink="false">5e0b323c603b490001769899</guid><dc:creator><![CDATA[FG]]></dc:creator><pubDate>Thu, 14 Feb 2019 07:25:15 GMT</pubDate><content:encoded><![CDATA[<h2 id="install-google-authenticator-package">Install google-authenticator package</h2><!--kg-card-begin: markdown--><p><code>sudo apt-get install libpam-google-authenticator</code></p>
<!--kg-card-end: markdown--><h3 id="configure-google-authenticator">Configure google-authenticator</h3><p>Run google-authenticator as the user you want to authenticate with 2FA and answer a few questions.</p><p>Shall this tool update your configuration file? Answer yes to this first question.</p><p>For maximum security:</p><ul><li>Restrict the use of a token by enforcing a delay between logins</li><li>Token time window: 30 sec.</li><li>Number of attempts: 3</li></ul><p>The last one helps prevent brute-force login attacks.</p><p>Scan the QR code (or enter the key code) with your favorite smartphone.</p><h3 id="make-your-ssh-config">Make your SSH config</h3><p>Open /etc/ssh/sshd_config and make sure you have these global configuration settings:</p><!--kg-card-begin: markdown--><p><code>UsePAM yes</code><br>
<code>ChallengeResponseAuthentication yes</code></p>
<!--kg-card-end: markdown--><p>While you can use the following parameters globally, I personally prefer a per-user config:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">Match User fred
     AuthenticationMethods publickey,keyboard-interactive
</code></pre>
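<p>A bad sshd_config can lock you out, so it is worth validating the file before restarting. A quick sketch (the service name <code>ssh</code> assumes a Debian-like, systemd-based host):</p><pre><code class="language-bash">sudo sshd -t                # parse the config and report errors, without restarting
sudo systemctl restart ssh  # restart only once the test passes
</code></pre>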
<!--kg-card-end: markdown--><h3 id="make-your-pam-config">Make your PAM config</h3><p>Open /etc/pam.d/sshd with your favorite editor :</p><!--kg-card-begin: markdown--><pre><code class="language-bash">sudo vim /etc/pam.d/sshd
</code></pre>
<!--kg-card-end: markdown--><p>Add the following line before the &quot;@include common-auth&quot; section:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">auth required  pam_google_authenticator.so
</code></pre>
<!--kg-card-end: markdown--><p>Restart your SSH server.</p><h2 id="test">Test</h2><p><strong>For your own safety, keep your current SSH session open</strong> and, from another window, open a new SSH session to test your new 2FA authentication.</p><h2 id="alternative">Alternative</h2><p>If you want to get rid of the password prompt and just rely on your SSH key + your OTP, then adjust your SSH PAM config like this:</p><!--kg-card-begin: markdown--><pre><code class="language-bash"># Standard Un*x authentication.
#@include common-auth
auth [success=1 default=ignore]   pam_google_authenticator.so
auth    requisite           pam_deny.so
auth    required            pam_permit.so
</code></pre>
<!--kg-card-end: markdown--><p>Comment out the default common-auth entry, which is the section that actually prompts for a password, and add the new &apos;auth&apos; lines shown above.</p>]]></content:encoded></item><item><title><![CDATA[TP MMI]]></title><description><![CDATA[<p>Here are the labs (TP) I taught for a few years to MMI (M&#xE9;tiers du Multim&#xE9;dia et Internet) students at the <a href="https://www.uca.fr">Universit&#xE9; Clermont Auvergne</a>.</p><p>These labs use a LinuxMint VM (what else?!) whose customized ISO you can</p>]]></description><link>https://detiers.com/tp-mmi/</link><guid isPermaLink="false">5e0b323c603b49000176989b</guid><dc:creator><![CDATA[FG]]></dc:creator><pubDate>Mon, 31 Dec 2018 09:15:07 GMT</pubDate><content:encoded><![CDATA[<p>Here are the labs (TP) I taught for a few years to MMI (M&#xE9;tiers du Multim&#xE9;dia et Internet) students at the <a href="https://www.uca.fr">Universit&#xE9; Clermont Auvergne</a>.</p><p>These labs use a LinuxMint VM (what else?!) whose customized ISO you can download <a href="https://files.detiers.com/ISO_VM/linuxmint-17.1-KDE-amd64-IUT-VM_build201505211653.iso">here</a>.
</p><h2 id="tp-r-seaux-mmi-premi-re-ann-e-semestre-1">TP R&#xE9;seaux, MMI, premi&#xE8;re ann&#xE9;e, semestre 1</h2><h3 id="tp1-calculs-d-espaces-d-adressage-ip">TP1 : calculs d&apos;espaces d&apos;adressage IP</h3><p><a href="https://files.detiers.com/public/MMI/S1-TP-RSX1/20141010-TP_RSX_1.pdf">TP R&#xE9;seau 1</a></p><p><a href="https://files.detiers.com/public/MMI/S1-TP-RSX1/20141006-TP_RSX_1_corrige.pdf">TP r&#xE9;seau 1 corrig&#xE9;</a></p><h3 id="tp-2-prise-en-main-de-wireshark-premi-res-captures-et-interpr-tations">TP 2 : Prise en main de <a href="https://www.wireshark.org">Wireshark</a>, premi&#xE8;res captures et interpr&#xE9;tations</h3><p><a href="https://files.detiers.com/public/MMI/S1-TP-RSX2/20141117-TP_RSX_2.pdf">TP R&#xE9;seau 2</a></p><p><a href="https://files.detiers.com/public/MMI/S1-TP-RSX2/20141117-TP_RSX_2_corrige.pdf">TP r&#xE9;seau 2 corrig&#xE9;</a></p><h3 id="tp-3-ce-tp-a-pour-objectif-de-mettre-en-pratique-la-capture-de-trame-ethernet-et-de-mettre-en-vidence-la-fragmentation-de-datagrammes-">TP 3 : Ce TP a pour objectif de mettre en pratique la capture de trame ethernet, et de mettre en &#xE9;vidence la fragmentation de datagrammes.</h3><p><a href="https://files.detiers.com/public/MMI/S1-TP-RSX3/20141205-TP_RSX_3.pdf">TP R&#xE9;seau 3</a></p><p><a href="https://files.detiers.com/public/MMI/S1-TP-RSX3/20141205-TP_RSX_3_corrige.pdf">TP R&#xE9;seau 3 corrig&#xE9;</a></p><h2 id="tp-syst-me-d-exploitation-mmi-premi-re-ann-e-semestre-1">TP Syst&#xE8;me d&apos;exploitation, MMI, premi&#xE8;re ann&#xE9;e, semestre 1</h2><h3 id="tp1-ce-tp-est-destin-vous-familiariser-avec-la-notion-de-virtualisation-et-la-manipulation-d-un-syst-me-d-exploitation-">TP1 : Ce TP est destin&#xE9; &#xE0; vous familiariser avec la notion de virtualisation, et la manipulation d&apos;un syst&#xE8;me d&apos;exploitation.</h3><p><a href="https://files.detiers.com/public/MMI/S1-TP-SE1/20150727-TP_SE_1.pdf">TP Syst&#xE8;me d&apos;exploitation 1 </a></p><h3 
id="tp2-ce-tp-a-pour-objectif-la-prise-en-main-d-une-distribution-gnu-linux-linux-mint-17-l-interface-graphique-est-relativement-similaire-ce-qui-existe-sous-windows-ou-macosx-nous-allons-donc-nous-int-resser-aux-sp-cificit-s-de-cet-os-et-pour-commencer-le-shell-">TP2 : Ce TP a pour objectif la prise en main d&#x2019;une distribution GNU/Linux, <a href="https://linuxmint.com">Linux Mint</a> 17. L&apos;interface graphique est relativement similaire &#xE0; ce qui existe sous Windows ou MacOSX. Nous allons donc nous int&#xE9;resser aux sp&#xE9;cificit&#xE9;s de cet OS. Et pour commencer : le shell.</h3><p><a href="https://files.detiers.com/public/MMI/S1-TP-SE2/20150727-TP_SE_2.pdf">TP Syst&#xE8;me d&apos;exploitation 2</a></p><p><a href="https://files.detiers.com/public/MMI/S1-TP-SE2/20141007-TP_SE_2_corrige.pdf">TP Syst&#xE8;me d&apos;exploitation 2 corrig&#xE9;</a></p><h3 id="tp3-ce-tp-vous-propose-de-r-diger-quelques-scripts-en-bash-">TP3 : Ce TP vous propose de r&#xE9;diger quelques scripts en bash.</h3><p>TP Syst&#xE8;me d&apos;exploitation <a href="https://files.detiers.com/public/MMI/S1-TP-SE3/20150727-TP_SE_3.pdf">3</a></p><p><a href="https://files.detiers.com/public/MMI/S1-TP-SE2/20141007-TP_SE_2_corrige.pdf">TP Syst&#xE8;me d&apos;exploitation 3 corrig&#xE9;</a></p><h3 id="tp4-ce-tp-aborde-la-gestion-des-comptes-utilisateurs-sous-linux-introduit-la-notion-de-privil-ge-et-par-cons-quent-la-notion-de-s-curit-ce-tp-fait-beaucoup-appel-aux-commandes-vues-pr-c-demment-">TP4 : Ce TP aborde la gestion des comptes utilisateurs sous linux, introduit la notion de privil&#xE8;ge et par cons&#xE9;quent la notion de s&#xE9;curit&#xE9;. 
Ce TP fait beaucoup appel aux commandes vues pr&#xE9;c&#xE9;demment.</h3><p><a href="https://files.detiers.com/public/MMI/S1-TP-SE4/20150727-TP_SE_4.pdf">TP Syst&#xE8;me d&apos;exploitation 4</a></p><h3 id="tp5-ce-tp-aborde-les-outils-n-cessaires-afin-de-g-rer-les-logiciels-install-s-sur-linux-">TP5 : Ce TP aborde les outils n&#xE9;cessaires afin de g&#xE9;rer les logiciels install&#xE9;s sur Linux.</h3><p><a href="https://files.detiers.com/public/MMI/S1-TP-SE5/20150727-TP_SE_5.pdf">TP Syst&#xE8;me d&apos;exploitation 5</a></p><h2 id="tp-services-sur-r-seaux-mmi-premi-re-ann-e-semestre-2">TP Services sur r&#xE9;seaux, MMI, premi&#xE8;re ann&#xE9;e, semestre 2</h2><h3 id="tp1-d-couverte-de-packet-tracer-outil-de-simulation-de-r-seaux-">TP1 : D&#xE9;couverte de <a href="https://www.netacad.com/fr/courses/packet-tracer">Packet Tracer</a> : outil de simulation de r&#xE9;seaux.</h3><p><a href="https://files.detiers.com/public/MMI/S2-TP-SSR1/S2-TP-SSR1.pdf">TP Service Sur R&#xE9;seau 1</a></p><p><a href="https://files.detiers.com/public/MMI/S2-TP-SSR1/S2-TP-SSR1_corrige.pdf">TP Service Sur R&#xE9;seau 1 corrig&#xE9;</a></p><h3 id="tp2-ce-tp-a-pour-objectif-de-mettre-en-application-les-vlan-et-d-introduire-la-notion-de-s-curit-d-acc-s-au-r-seau-physique-par-les-m-canismes-de-limitation-de-propagation-de-vlan-et-de-s-curit-li-au-port-">TP2 : Ce TP a pour objectif de mettre en application les VLAN et d&#x2019;introduire la notion de s&#xE9;curit&#xE9; d&#x2019;acc&#xE8;s au r&#xE9;seau physique par les m&#xE9;canismes de limitation de propagation de VLAN et de s&#xE9;curit&#xE9; li&#xE9; au port.</h3><p><a href="https://files.detiers.com/public/MMI/S2-TP-SSR2/S2-TP-SSR2.pdf">TP Service Sur R&#xE9;seau 2</a></p><p><a href="https://files.detiers.com/public/MMI/S2-TP-SSR2/S2-TP-SSR2_corrige.pdf">TP Service Sur R&#xE9;seau 2 corrig&#xE9;</a></p><h3 
id="tp3-ce-tp-propose-d-tudier-le-protocole-spanning-tree-sa-bonne-compr-hension-est-indispensable-afin-d-assurer-la-stabilit-d-un-r-seau-local-">TP3 : Ce TP propose d&#x2019;&#xE9;tudier le protocole spanning tree. Sa bonne compr&#xE9;hension est indispensable afin d&#x2019;assurer la stabilit&#xE9; d&#x2019;un r&#xE9;seau local.</h3><p><a href="https://files.detiers.com/public/MMI/S2-TP-SSR3/S2-TP-SSR3.pdf">TP Service Sur R&#xE9;seau 3</a></p><h3 id="tp4-ce-tp-propose-la-mise-en-oeuvre-de-routage-inter-vlan-de-routage-statique-et-pr-sente-le-principe-de-l-agr-gation-de-routes-">TP4 : Ce TP propose la mise en oeuvre de routage inter-VLAN, de routage statique et pr&#xE9;sente le principe de l&#x2019;agr&#xE9;gation de routes.</h3><p><a href="https://files.detiers.com/public/MMI/S2-TP-SSR4/S2-TP-SSR4.pdf">TP Service Sur R&#xE9;seau 4</a></p><p><a href="https://files.detiers.com/public/MMI/S2-TP-SSR4/S2-TP-SSR4_corrige.pdf">TP Service Sur R&#xE9;seau 4 corrig&#xE9;</a></p><h3 id="tp5-ce-tp-propose-la-mise-en-oeuvre-de-filtrage-le-r-seau-que-nous-allons-tudier-est-constitu-d-une-zone-priv-e-que-nous-pouvons-comparer-au-r-seau-d-une-petite-entreprise-un-isp-fourni-un-routeur-sur-lequel-sont-branch-s-les-quipements-du-client-">TP5 : Ce TP propose la mise en oeuvre de filtrage. Le r&#xE9;seau que nous allons &#xE9;tudier est constitu&#xE9; d&#x2019;une zone priv&#xE9;e, que nous pouvons comparer au r&#xE9;seau d&#x2019;une petite entreprise. 
Un ISP fourni un routeur, sur lequel sont branch&#xE9;s les &#xE9;quipements du client.</h3><p><a href="https://files.detiers.com/public/MMI/S2-TP-SSR5/S2-TP-SSR5.pdf">TP Service Sur R&#xE9;seau 5</a></p><p><a href="https://files.detiers.com/public/MMI/S2-TP-SSR5/S2-TP-SSR5_corrige.pdf">TP Service Sur R&#xE9;seau 5 corrig&#xE9;</a></p><h2 id="tp-services-sur-r-seaux-mmi-seconde-ann-e-semestre-1">TP Services sur r&#xE9;seaux, MMI, seconde ann&#xE9;e, semestre 1</h2><h3 id="tp1-ce-tp-a-pour-objectif-la-prise-en-main-du-shell-l-interpr-teur-de-commande-des-distributions-gnu-linux-la-ma-trise-de-cet-outil-est-indispensable-pour-bien-aborder-ensuite-l-installation-et-la-configuration-des-composants-de-type-serveur-web-serveur-de-messagerie-etc-">TP1 : Ce TP a pour objectif la prise en main du shell, l&apos;interpr&#xE9;teur de commande des distributions GNU/Linux. La ma&#xEE;trise de cet outil est indispensable pour bien aborder ensuite l&apos;installation et la configuration des composants de type serveur web, serveur de messagerie, etc.</h3><p><a href="https://files.detiers.com/public/MMI/S3-TP-SSR1/20141208-TP_SSR_1.pdf">TP Service Sur R&#xE9;seau 1</a></p><p><a href="https://files.detiers.com/public/MMI/S3-TP-SSR1/20141208-TP_SSR_1_corrige.pdf">TP Service Sur R&#xE9;seau 1 corrig&#xE9;</a></p><h3 id="tp2-ce-tp-aborde-la-gestion-des-comptes-utilisateurs-sous-linux-introduit-la-notion-de-privil-ge-et-par-cons-quent-la-notion-de-s-curit-ce-tp-fait-beaucoup-appel-aux-commandes-vues-pr-c-demment-">TP2 : Ce TP aborde la gestion des comptes utilisateurs sous linux, introduit la notion de privil&#xE8;ge et par cons&#xE9;quent la notion de s&#xE9;curit&#xE9;. 
Ce TP fait beaucoup appel aux commandes vues pr&#xE9;c&#xE9;demment.</h3><p><a href="https://files.detiers.com/public/MMI/S3-TP-SSR2/20141209-TP_SSR_2.pdf">TP Service Sur R&#xE9;seau 2</a></p><p><a href="https://files.detiers.com/public/MMI/S3-TP-SSR2/20141209-TP_SSR_2_corrige.pdf">TP Service Sur R&#xE9;seau 2 corrig&#xE9;</a></p><h3 id="tp3-ce-tp-permet-d-installer-un-dns-bind-sur-un-syst-me-linux-savoir-linux-mint-">TP3 : Ce TP permet d&apos;installer un DNS (bind) sur un syst&#xE8;me Linux, &#xE0; savoir Linux Mint.</h3><p><a href="https://files.detiers.com/public/MMI/S3-TP-SSR3/20141212-TP_SSR_3.pdf">TP Service Sur R&#xE9;seau 3</a></p><p><a href="https://files.detiers.com/public/MMI/S3-TP-SSR3/20141212-TP_SSR_3_corrige.pdf">TP Service Sur R&#xE9;seau 3 corrig&#xE9;</a></p><h3 id="l-objectif-des-tp-suivants-est-d-installer-un-serveur-web-lamp-linux-apache-mysql-php-">L&apos;objectif des TP suivants est d&apos;installer un serveur web LAMP (Linux / Apache / MySQL / PHP ).</h3><p></p><h3 id="tp5-installation-d-apache">TP5 : Installation d&apos;Apache</h3><p><a href="https://files.detiers.com/public/MMI/S3-TP-SSR4/20150113-TP_SSR_4.pdf">TP Service Sur R&#xE9;seau </a>5</p><p><a href="https://files.detiers.com/public/MMI/S3-TP-SSR4/20150113-TP_SSR_4_corrige.pdf">TP Service Sur R&#xE9;seau 5 corrig&#xE9;</a></p><h3 id="tp6-configuration-avanc-e-d-apache-">TP6 : Configuration avanc&#xE9;e d&apos;Apache.</h3><p><a href="https://files.detiers.com/public/MMI/S3-TP-SSR6/20150126-TP_SSR_6.pdf">TP Service Sur R&#xE9;seau 6</a></p><p><a href="https://files.detiers.com/public/MMI/S3-TP-SSR6/20150126-TP_SSR_6_corrige.pdf">TP Service Sur R&#xE9;seau 6 corrig&#xE9;</a></p><h3 id="tp7-finalisation-mysql-et-php">TP7 : Finalisation : mysql et PHP</h3><p><a href="https://files.detiers.com/public/MMI/S3-TP-SSR7/20150128-TP_SSR_7.pdf">TP Service Sur R&#xE9;seau 7</a></p><p><a href="https://files.detiers.com/public/MMI/S3-TP-SSR7/20150128-TP_SSR_7_corrige.pdf">TP Service Sur 
R&#xE9;seau 7 corrig&#xE9;</a></p>]]></content:encoded></item><item><title><![CDATA[Update all of your GIT repos with one line]]></title><description><![CDATA[<p>I have so many GIT repos that I wanted to be able to update them all easily, with just a one-line command I could alias.</p><p>A simple version could look like this:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">find . -type d -maxdepth 1 -name &quot;??*&quot; -exec sh -c &quot;cd &apos;{}&apos; &amp;&amp; git</code></pre>]]></description><link>https://detiers.com/untitled-2-2/</link><guid isPermaLink="false">5e41501251573500012797ac</guid><dc:creator><![CDATA[FG]]></dc:creator><pubDate>Tue, 04 Dec 2018 10:04:24 GMT</pubDate><content:encoded><![CDATA[<p>I have so many GIT repos that I wanted to be able to update them all easily, with just a one-line command I could alias.</p><p>A simple version could look like this:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">find . -type d -maxdepth 1 -name &quot;??*&quot; -exec sh -c &quot;cd &apos;{}&apos; &amp;&amp; git pull; cd ..  &quot; \;
</code></pre>
<!--kg-card-end: markdown--><p>The idea is simple: it walks into every folder below the current one, runs <code>git pull</code> inside, then moves on to the next folder.</p><p>I&apos;m lazy ... I certainly don&apos;t want to type this long command every time, so I added an alias. Add this line to your .bash_profile (or .bashrc, whatever).</p><p>This is a slightly more complex version which avoids digging into the .git folder and writes useful feedback to your terminal:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">alias gla=&apos;find . -type d -maxdepth 4 -not -path &quot;*/.git*&quot; -name &quot;??*&quot;  -exec bash -c &quot;cd \&quot;{}\&quot;; [ -d .git ] &amp;&amp; ( echo ------------ Updating {} ; git pull --all) ; cd .. ; &quot; \;&apos;
</code></pre>
<!--kg-card-end: markdown--><p>Reload your profile:</p><!--kg-card-begin: markdown--><pre><code class="language-bash">source ~/.bash_profile
</code></pre>
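<p>For the record, here is a sketch of the same idea written as a function instead of an alias. It is an alternative I have not battle-tested, not the alias above: it uses <code>find -print0</code> so folder names with spaces survive, and <code>git -C</code> (git 1.8.5+) instead of <code>cd</code>-ing around:</p>

```shell
update_repos() {
  # look for .git folders up to 4 levels below the given directory (default: .)
  find "${1:-.}" -maxdepth 4 -type d -name .git -print0 |
  while IFS= read -r -d '' gitdir; do
    repo=${gitdir%/.git}        # strip the trailing /.git to get the repo path
    echo "------------ Updating $repo"
    git -C "$repo" pull --all   # pull without changing the current directory
  done
}
```

<p>Call it as <code>update_repos</code>, or point it somewhere with <code>update_repos ~/src</code>.</p>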
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Use your touchid for sudo]]></title><description><![CDATA[<p>The script below add the one line configuration if it hasn&apos;t been already done</p><!--kg-card-begin: html--><script src="https://gist.github.com/frgaudet/fcb1375f93d04ef083a96038f76f2352.js"></script><!--kg-card-end: html--><p>Source : <a href="http://www.unixfu.ch/how-to-authenticate-sudo-with-touchid/">http://www.unixfu.ch/how-to-authenticate-sudo-with-touchid</a></p>]]></description><link>https://detiers.com/use-your-touchid-for-sudo/</link><guid isPermaLink="false">5e0b323c603b490001769898</guid><dc:creator><![CDATA[FG]]></dc:creator><pubDate>Wed, 03 Oct 2018 08:06:00 GMT</pubDate><content:encoded><![CDATA[<p>The script below add the one line configuration if it hasn&apos;t been already done</p><!--kg-card-begin: html--><script src="https://gist.github.com/frgaudet/fcb1375f93d04ef083a96038f76f2352.js"></script><!--kg-card-end: html--><p>Source : <a href="http://www.unixfu.ch/how-to-authenticate-sudo-with-touchid/">http://www.unixfu.ch/how-to-authenticate-sudo-with-touchid</a></p>]]></content:encoded></item><item><title><![CDATA[Replace (one or more!) ceph journal disk]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Simple script used to replace journal disk. I regenerate journal UUID for convenience but you can also keep as it is.</p>
<p>The layout is as follows:<br>
disk /dev/sda: 5 journal partitions -&gt; 5 OSDs (OSD.40 to OSD.44)<br>
disk /dev/sdb: 5 journal partitions -&gt; 5</p>]]></description><link>https://detiers.com/replace-ceph-journal/</link><guid isPermaLink="false">5e0b323c603b490001769897</guid><dc:creator><![CDATA[FG]]></dc:creator><pubDate>Fri, 10 Mar 2017 21:05:20 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>A simple script I use to replace a journal disk. I regenerate the journal UUID for convenience, but you can also keep it as it is.</p>
<p>The layout is as follows:<br>
disk /dev/sda: 5 journal partitions -&gt; 5 OSDs (OSD.40 to OSD.44)<br>
disk /dev/sdb: 5 journal partitions -&gt; 5 OSDs (OSD.45 to OSD.49)<br>
disk /dev/sdc: 5 journal partitions -&gt; 5 OSDs (OSD.50 to OSD.54)</p>
<p>First, set the noout flag to prevent any data migration.</p>
<pre><code class="language-bash">ceph osd set noout
</code></pre>
<p>Then stop each OSD related to the first journal disk.</p>
<pre><code class="language-bash">#!/bin/bash

partitions=&quot;1 2 3 4 5&quot;
num=40
for partition in $partitions
do
systemctl stop ceph-osd@$num.service
ceph-osd -i $num --flush-journal
((num++))
done
</code></pre>
<p>Hot-swap the first journal disk, then execute the following script:</p>
<pre><code class="language-bash">#!/bin/bash

partitions=&quot;1 2 3 4 5&quot;
# First OSD number
num=40
# Disk
disk=sda
for partition in $partitions
do
journal_uuid=$(uuidgen)
sgdisk --new=$partition:0:+5120M --change-name=$partition:&apos;ceph journal&apos; --partition-guid=$partition:$journal_uuid --typecode=$partition:$journal_uuid --mbrtogpt -- /dev/$disk
partprobe /dev/$disk

mv /var/lib/ceph/osd/ceph-$num/journal /var/lib/ceph/osd/ceph-$num/journal.old
ln -s /dev/disk/by-partuuid/$journal_uuid /var/lib/ceph/osd/ceph-$num/journal

mv /var/lib/ceph/osd/ceph-$num/journal_uuid /var/lib/ceph/osd/ceph-$num/journal_uuid.old
echo $journal_uuid &gt; /var/lib/ceph/osd/ceph-$num/journal_uuid

chown -R ceph. /var/lib/ceph/osd/ceph-$num/journal
chown ceph. /var/lib/ceph/osd/ceph-$num/journal_uuid

ceph-osd -i $num --mkjournal
chown ceph. /dev/$disk$partition
systemctl start ceph-osd@$num.service
((num++))
done
</code></pre>
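<p>Before moving on, I like to double-check that each new journal symlink resolves to something that actually exists. A quick sketch (the OSD base directory and numbers are arguments, so nothing here is hardcoded):</p>

```shell
check_journals() {
  # usage: check_journals BASE_DIR OSD_NUM [OSD_NUM ...]
  base=$1; shift
  for num in "$@"; do
    target=$(readlink -f "$base/ceph-$num/journal")
    if [ -e "$target" ]; then
      echo "OSD.$num journal OK: $target"
    else
      echo "OSD.$num journal BROKEN"
    fi
  done
}
# example: check_journals /var/lib/ceph/osd 40 41 42 43 44
```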
<p>Proceed the same way with <code>sdb</code> and <code>sdc</code> if necessary, adjusting the OSD start number in the script.</p>
<p>Last step: unset the noout flag.</p>
<pre><code class="language-bash">ceph osd unset noout
</code></pre>
<p>Source file:<br>
<a href="https://github.com/frgaudet/replace-ceph-journal">Github link</a></p>
<p>Reference:<br>
<a href="https://www.sebastien-han.fr/blog/2014/11/27/ceph-recover-osds-after-ssd-journal-failure/">S&#xE9;bastien Han blog</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[CSF and OpenVPN]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h1 id="issue">Issue</h1>
<p>Out of the box, CSF + OpenVPN don&apos;t work together. Well, not without further configuration.</p>
<p>Let&apos;s say your OpenVPN conf is working, but you just can&apos;t get out on the internet through your VPN box.</p>
<p>It seems we need some masquerading, and maybe a few other things.</p>
<h1 id="configuration">Configuration</h1>
<p>Here is</p>]]></description><link>https://detiers.com/csf-and-openvpn/</link><guid isPermaLink="false">5e0b323c603b490001769896</guid><dc:creator><![CDATA[FG]]></dc:creator><pubDate>Wed, 29 Jun 2016 13:53:34 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h1 id="issue">Issue</h1>
<p>Out of the box, CSF + OpenVPN don&apos;t work together. Well, not without further configuration.</p>
<p>Let&apos;s say your OpenVPN conf is working, but you just can&apos;t get out on the internet through your VPN box.</p>
<p>It seems we need some masquerading, and maybe a few other things.</p>
<h1 id="configuration">Configuration</h1>
<p>Here is my configuration:</p>
<p>eth0: external NIC<br>
10.8.0.0/24: my tunnel network</p>
<h1 id="solution">Solution</h1>
<p>Don&apos;t forget to enable packet forwarding:<br>
<code>echo 1 &gt; /proc/sys/net/ipv4/ip_forward</code></p>
<p>Edit your sysctl file to make your change permanent.</p>
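<p>On a typical install that boils down to appending one line and reloading; a sketch assuming <code>/etc/sysctl.conf</code> (some distros prefer a snippet under /etc/sysctl.d instead):</p>

```shell
echo "net.ipv4.ip_forward = 1" | tee -a /etc/sysctl.conf
sysctl -p
```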
<p>Then create the following file in your csf folder:</p>
<p><code>vi /etc/csf/csfpost.sh</code></p>
<p>Enter the following (three separate commands):</p>
<pre><code>iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT
iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
</code></pre>
<p>Relaunch csf:</p>
<p><code>csf -r</code></p>
<p>Enjoy. De nada.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Bonding]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>A bonding configuration allows you to increase your bandwidth by merging several physical interfaces into a single virtual one. LACP is a well-known aggregation protocol, available in most switches and in Linux. This post will show you how to use LACP between a switch and a Linux server.</p>
<h1 id="linux">Linux</h1>
<h2 id="loadthekernelmodule">Load</h2>]]></description><link>https://detiers.com/bonding/</link><guid isPermaLink="false">5e0b323c603b490001769895</guid><dc:creator><![CDATA[FG]]></dc:creator><pubDate>Thu, 31 Mar 2016 15:01:43 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>A bonding configuration allows you to increase your bandwidth by merging several physical interfaces into a single virtual one. LACP is a well-known aggregation protocol, available in most switches and in Linux. This post will show you how to use LACP between a switch and a Linux server.</p>
<h1 id="linux">Linux</h1>
<h2 id="loadthekernelmodule">Load the kernel module</h2>
<pre><code>cat &gt; /etc/modprobe.d/bond.conf &lt;&lt; EOF
alias bond0 bonding
options bond0 miimon=100 mode=4 lacp_rate=1
EOF
</code></pre>
<h2 id="disablenetworkmanager">Disable NetworkManager</h2>
<p>Then, you need to disable NetworkManager.</p>
<pre><code>systemctl stop NetworkManager.service
systemctl disable NetworkManager.service
</code></pre>
<h2 id="configureyourphysicalinterfaces">Configure your physical interfaces</h2>
<p>Choose the two (or more) network interfaces you want to join together. Their configuration must be identical: Ethernet type (TX versus LX), speed, duplex, etc.</p>
<pre><code>cat &gt; /etc/sysconfig/network-scripts/ifcfg-p1p1 &lt;&lt; EOF
BOOTPROTO=&quot;none&quot;
NM_CONTROLLED=no
DEVICE=&quot;p1p1&quot;
NAME=p1p1
ONBOOT=yes
SLAVE=yes
MASTER=bond0
EOF
</code></pre>
<p>Second interface:</p>
<pre><code>cat &gt; /etc/sysconfig/network-scripts/ifcfg-p1p2 &lt;&lt; EOF
BOOTPROTO=&quot;none&quot;
NM_CONTROLLED=no
DEVICE=&quot;p1p2&quot;
NAME=p1p2
ONBOOT=yes
SLAVE=yes
MASTER=bond0
EOF
</code></pre>
<h2 id="createyourvirtualinterface">Create your virtual interface</h2>
<p>Now you can add a new interface, <code>bond0</code>, which will be your virtual network interface:</p>
<pre><code>cat &gt; /etc/sysconfig/network-scripts/ifcfg-bond0 &lt;&lt; EOF
BOOTPROTO=&quot;none&quot;
DEVICE=&quot;bond0&quot;
NAME=bond0
IPADDR=192.168.0.45
PREFIX=24
DNS1=192.168.0.2
GATEWAY=192.168.0.1
DOMAIN=local
DEFROUTE=yes
ONBOOT=yes
PEERDNS=yes
PEERROUTES=yes
USERCTL=no
EOF
</code></pre>
<h1 id="switchpart">Switch part</h1>
<p>Let&apos;s now configure the switch. That&apos;s pretty simple. Just don&apos;t forget to apply any further configuration to the port channel interface, not the underlying physical ones. If the configuration differs between two channel group members, the port channel will be disabled and a traffic interruption will occur.</p>
<p>Also, configure the port channel using <code>active</code> mode, which means the switch tries to &quot;negotiate&quot; the channel. The other configuration options are <code>passive</code> (wait for LACP negotiation) and <code>on</code> (unconditionally build the port channel).</p>
<p>On Cisco switches you can add up to 8 members to a channel group.</p>
<h2 id="ciscoios">Cisco IOS</h2>
<pre><code>interface GigabitEthernet2
description SERVER1_p1p1
channel-group 1 mode active
!
interface GigabitEthernet3
description SERVER1_p1p2
channel-group 1 mode active
!
interface Port-channel1
description SERVER1
!
</code></pre>
<h2 id="dell">Dell</h2>
<p>The Dell syntax is the same as Cisco IOS.</p>
<pre><code>interface Te1/0/1
channel-group 1 mode active
description &quot;SERVER1_p1p1&quot;
!
interface Te1/0/2
channel-group 1 mode active
description &quot;SERVER1_p1p2&quot;
!
interface port-channel 1
description &quot;SERVER1&quot;
!
</code></pre>
<h1 id="checkyourconfiguration">Check your configuration</h1>
<h2 id="onyourlinuxbox">On your Linux box</h2>
<pre><code>cat /proc/net/bonding/bond0
</code></pre>
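<p>With <code>mode=4</code> you should see the line <code>Bonding Mode: IEEE 802.3ad Dynamic link aggregation</code> in there. If you script your checks, here is a tiny helper sketch (the proc path is an argument so you can point it at a saved copy):</p>

```shell
bond_mode() {
  # print the "Bonding Mode" line of a bonding proc file (default: bond0)
  grep -m1 "^Bonding Mode:" "${1:-/proc/net/bonding/bond0}"
}
# example: bond_mode /proc/net/bonding/bond0
```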
<h2 id="onyourswitch">On your switch</h2>
<h3 id="dell">Dell</h3>
<pre><code>show interfaces status port-channel 1
</code></pre>
<h3 id="cisco">Cisco</h3>
<pre><code>show interface po 1
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Create host and various things in Zabbix]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Recently I wanted to monitor some servers. In the old days I used Nagios or Cacti for such a purpose (both based on rrdtool) but I wanted to change and give Zabbix a chance.</p>
<p>Zabbix turned out to be a nice product, I have to say. When it&apos;s easy to</p>]]></description><link>https://detiers.com/create-host-and-various-things-in-zabbix/</link><guid isPermaLink="false">5e0b323c603b49000176988a</guid><dc:creator><![CDATA[FG]]></dc:creator><pubDate>Fri, 25 Mar 2016 16:34:12 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Recently I wanted to monitor some servers. In the old days I used Nagios or Cacti for such a purpose (both based on rrdtool) but I wanted to change and give Zabbix a chance.</p>
<p>Zabbix turned out to be a nice product, I have to say. When it&apos;s easy to tweak, I like it :)</p>
<p>So, I wanted to monitor running virtual machines, but the thing is they are not accessible from the Zabbix server. The hypervisor must send all VM statistics to the server.</p>
<p>In the little script I wrote, I just demonstrate how I get all running VMs (qemu domains), and then create a corresponding host in Zabbix.</p>
<p>At the end, I create an item (CPU utilization, but it could be something else) and a graph for it.</p>
<script src="https://gist.github.com/frgaudet/5669778757e773676b43.js"></script>
<p>What I need to do next: a little script to update the CPU time and send the value to the server.</p>
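<p>That computation is just a delta between two cumulative samples. A sketch, assuming CPU time in nanoseconds as libvirt reports it (the function name is mine):</p>

```shell
cpu_util() {
  # usage: cpu_util T0_NS T1_NS INTERVAL_SECONDS
  # percentage of one CPU consumed between two cumulative cpu-time samples
  echo $(( ( ($2 - $1) * 100 ) / ( $3 * 1000000000 ) ))
}
# example: cpu_util 0 500000000 1   prints 50
```

<p>The resulting value can then be pushed to the server with <code>zabbix_sender -z SERVER -s HOST -k KEY -o VALUE</code>.</p>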
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Use ceilometer API]]></title><description><![CDATA[<!--kg-card-begin: markdown--><h1 id="whydoineed">Why do I need this?</h1>
<p>First, get the instance names from Nova. This is not mandatory, but it lets you work with VM names. More elegant.</p>
<p>Then get a ceilometer object and perform a request.</p>
<h1 id="gettheserverlistfromnova">Get the server list from Nova</h1>
<h2 id="retrieveenvironmentvariables">Retrieve environment variables</h2>
<pre><code>os_username = os.environ.get(&apos;OS_</code></pre>]]></description><link>https://detiers.com/use-ceilometer-api/</link><guid isPermaLink="false">5e0b323c603b490001769894</guid><dc:creator><![CDATA[FG]]></dc:creator><pubDate>Fri, 25 Mar 2016 15:44:17 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h1 id="whydoineed">Why do I need this?</h1>
<p>First, get the instance names from Nova. This is not mandatory, but it lets you work with VM names. More elegant.</p>
<p>Then get a ceilometer object and perform a request.</p>
<h1 id="gettheserverlistfromnova">Get the server list from Nova</h1>
<h2 id="retrieveenvironmentvariables">Retrieve environment variables</h2>
<pre><code>os_username = os.environ.get(&apos;OS_USERNAME&apos;)
os_password = os.environ.get(&apos;OS_PASSWORD&apos;)
os_regionname = os.environ.get(&apos;OS_REGION_NAME&apos;)
os_project_name = os.environ.get(&apos;OS_PROJECT_NAME&apos;)
os_auth_url = os.environ.get(&apos;OS_AUTH_URL&apos;)
</code></pre>
<p>Then, create a nova client object :</p>
<pre><code>novaclient = client.Client(os_username,
             os_password, 
             os_project_name, 
             os_auth_url, 
             service_type=&quot;compute&quot;, 
             region_name=os_regionname)
</code></pre>
<h2 id="getserverlist">Get the server list</h2>
<pre><code>servers = novaclient.servers.list(detailed=True)
</code></pre>
<h2 id="ceilometer">Ceilometer</h2>
<p>Now get a ceilometer client object:</p>
<pre><code>ceilometerclient = ceilometerclient.client.get_client(2,
         os_username=os_username,
         os_password=os_password,
         os_tenant_name=os_project_name,
         os_auth_url=os_auth_url,
         region_name=os_regionname)
</code></pre>
<h3 id="createthequery">Create the query</h3>
<p>Pick a server id from the list you obtained in the previous step.</p>
<pre><code>query = [dict(field=&apos;resource_id&apos;, op=&apos;eq&apos;, value=server.id)]
</code></pre>
<h3 id="gettheresults">Get the results</h3>
<pre><code>cpu_util = ceilometerclient.samples.list(meter_name=&apos;cpu_util&apos;, limit=1, q=query)    
</code></pre>
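<p>For the record, the same query can be issued against the REST API directly. This is only a sketch: it assumes a valid token in <code>TOKEN</code>, the instance id in <code>SERVER_ID</code>, and ceilometer listening on its default port 8777 (<code>controller</code> is a placeholder host name):</p>

```shell
curl -s -G -H "X-Auth-Token: $TOKEN" \
  --data-urlencode "q.field=resource_id" \
  --data-urlencode "q.op=eq" \
  --data-urlencode "q.value=$SERVER_ID" \
  --data-urlencode "limit=1" \
  "http://controller:8777/v2/meters/cpu_util"
```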
<p>The full script is here:</p>
<script src="https://gist.github.com/frgaudet/7ee12f4a02a5a306c687.js"></script><!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Annoying message...]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>If csf warns you every day while trying to update itself with this message:</p>
<pre>URLGET set to use LWP but perl module is not installed, reverting to HTTP::Tiny</pre>
<p>Then install libwww-perl:</p>
<pre>sudo apt-get install libwww-perl</pre>
<p>or</p>
<pre>yum install perl-libwww-perl</pre>
<p>on CentOS-like systems.</p>
<!--kg-card-end: markdown-->]]></description><link>https://detiers.com/annoying-message/</link><guid isPermaLink="false">5e0b323c603b490001769893</guid><dc:creator><![CDATA[FG]]></dc:creator><pubDate>Sat, 22 Aug 2015 14:23:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>If csf warns you every day while trying to update itself with this message:</p>
<pre>URLGET set to use LWP but perl module is not installed, reverting to HTTP::Tiny</pre>
<p>Then install libwww-perl:</p>
<pre>sudo apt-get install libwww-perl</pre>
<p>or</p>
<pre>yum install perl-libwww-perl</pre>
<p>on CentOS-like systems.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Change hostname from DNS server]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Sometimes you want to change your hostname; more precisely, you want your hostname to actually reflect your DNS change.</p>
<p>I won&apos;t discuss when such a situation happens, but it can. So here is how I manage it. The script runs under Ubuntu, but it should also run</p>]]></description><link>https://detiers.com/change-hostname-from-dns-server/</link><guid isPermaLink="false">5e0b323c603b490001769889</guid><dc:creator><![CDATA[FG]]></dc:creator><pubDate>Fri, 22 May 2015 15:22:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Sometimes you want to change your hostname; more precisely, you want your hostname to actually reflect your DNS change.</p>
<p>I won&apos;t discuss when such a situation happens, but it can. So here is how I manage it. The script below runs under Ubuntu, but it should also run under any distro which uses NetworkManager.</p>
<p>Drop this file into /etc/NetworkManager/dispatcher.d</p>
<script src="https://gist.github.com/frgaudet/0c8b033983d150c2ebc2.js"></script>
<p>Basically, it performs a DNS request, then compares the result to the hostname already defined. If the hostname has changed, the script updates the network system files.</p>
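<p>The gist does a bit more (it also touches the network system files), but the core logic boils down to this sketch. <code>dig</code> and <code>hostnamectl</code> are assumptions on my part here; adapt to whatever tools your distro ships:</p>

```shell
update_hostname_from_dns() {
  # usage: update_hostname_from_dns IP CURRENT_NAME
  new=$(dig +short -x "$1" | sed "s/\.$//")  # reverse lookup, drop trailing dot
  if [ -n "$new" ]; then
    if [ "$new" != "$2" ]; then
      hostnamectl set-hostname "$new"
    fi
  fi
}
```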
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>