http://amd.co.at/adminw/index.php?title=Special:Contributions/Ch&feed=atom&limit=50&target=Ch&year=&month=AdminWiki - User contributions [en]2024-03-28T23:36:14ZFrom AdminWikiMediaWiki 1.16.2http://amd.co.at/adminwiki/Hardware/Remote_ManagementHardware/Remote Management2010-07-21T14:48:43Z<p>Ch: </p>
<hr />
<div>= HP Integrated Lights-Out (ILO) =<br />
Standard/onboard on HP DL3xx G3+ Servers. See http://h18004.www1.hp.com/products/servers/management/ilo/ .<br />
<br />
If you are running a Unix-like OS (= no graphical output), you usually will not really need the Advanced Edition. Of course, dont forget to turn off all graphical stuff (graphical GRUB menu, boot splashes, X11 on boot, etc).<br />
<br />
On the other hand, the Advanced Edition allows for a few nice usage cases:<br />
* full remote installation (RedHat EL needs X11 for full customization during install)<br />
* remote "offline" firmware upgrade (the HP Software Maintenance CD needs X11 and a Mouse)<br />
* CD/Floppy forwarding from your workstation, or from HTTP Servers<br />
You can enter the same Advanced Edition license key on multiple servers - but you cannot remove it from an ILO! You better have the correct count of Licenses ...<br />
<br />
Newer firmware versions (1.70+) support SSH and [http://www.dmtf.org/standards/smash SMASH CLP].<br />
<br />
= HP Integrated Lights-Out 2 (ILO2) =<br />
Standard/onboard on HP DL3xx G5+ Servers.<br />
<br />
Basically the same as the old HP ILO, but with new quirks :)<br />
<br />
* No longer has textmode support. You '''need''' the Advanced Edition if you want to see '''any''' Video Output.<br />
* But has a Virtual Serial Port, so you could use ttyS0 as your console.<br />
* SSH can only drive the VSP.<br />
<br />
Upgrade to at least 1.8x to fix most problems, including long login delays, spontanous ILO2 reboots, etc. Upgrade to 2.00 for a way faster web interface.<br />
<br />
== SSH-Tunnel for Java Console Applet ==<br />
<br />
sudo ssh -L 443:192.168.127.50:443 -L 22:192.168.127.50:22 -L 23:192.168.127.50:23 youruser@gatewayhost<br />
<br />
Note that you need to stop your locally running sshd before starting ssh, and then go to https://localhost/ in your browser (which will complain about the wrong hostname).<br />
<br />
== Virtual CD-ROM ==<br />
<br />
To set up a virtual CD-ROM, which uses an ISO File on a remote location as its source, try this, via ssh:<br />
<br />
<pre><br />
vm cdrom insert http://192.168.0.2/grml_1.0.iso<br />
vm cdrom set connect<br />
vm cdrom set boot_always<br />
</pre><br />
<br />
= Compaq RILOE =<br />
The good old plug-in cards from Compaq. Work in the DL360G1/2, DL380G1 and possibly in other servers. Supports text-mode only, but floppy images (can be used to load etherboot from the floppy image and then let it pxe-boot).<br />
<br />
= Dell ... =<br />
<br />
= [http://www.dmtf.org/standards/smash SMASH CLP] useful commands =<br />
<br />
* reset /system1/power ...</div>Chhttp://amd.co.at/adminwiki/MySQLMySQL2009-06-24T15:37:57Z<p>Ch: </p>
<hr />
<div>= Introduction =<br />
<br />
MySQL is the defacto standard database in the opensource world, but it's reign isn't undisputed. See the link list on the [[Databases]] page.<br />
<br />
<br />
== Database formats ==<br />
<br />
MySQL currently offers two database formats. MyISAM is the non-transactional in-place updating table. Think of it more like a relational textfile than a real database. InnoDB is the relational table-format, which MySQL promotes as being ACID-conform. It's less mature than MyISAM and it's behaviour with semi-corrupted tables (these things happen in the real world &trade;) have yet to be discovered by us *knocks on wood*.<br />
<br />
If you've got any data that is of value to you, you should strongly opt for a transactional database along with an application which utilizes those features.<br />
<br />
= Migration woes =<br />
<br />
Since 3.23 a few things have changed. Depending on the version jump one or more of those points may apply to you.<br />
<br />
== Password format ==<br />
<br />
If you're using "old" user tables and/or "old" clients to connect to your database make sure to put<br />
<br />
old_passwords = 1<br />
<br />
in your my.cnf.<br />
<br />
== Auth table changes ==<br />
<br />
= Replication =<br />
<br />
TODO<br />
<br />
Unfuck replication:<br />
<br />
* stop master/lock all tables<br />
<br />
* show master status, note position and file. alternatively, check filesize and name of binlog<br />
<br />
* stop slave<br />
<br />
* slave# rsync -av -P --delete rsync://master/mysql /var/lib/mysql<br />
<br />
* start slave with slavethread disabled or stopped master<br />
<br />
* CHANGE MASTER TO MASTER_HOST='<ip>', MASTER_USER='<user>', MASTER_PASSWORD='<pass>', MASTER_LOG_FILE='<file>', MASTER_LOG_POS=<pos>;<br />
<br />
* start slave thread or start master<br />
<br />
=Timezones=<br />
<br />
* store values you expect to consider the mysql time_zone setting as timestamp<br />
<br />
= Exporting =<br />
<br />
To export a MySQL Database, data-only (as schema will never be compatible across DBs), in a somewhat compatible mode, probably suitable for import into [[PostgreSQL]], you could try using this:<br />
<br />
mysqldump --extended-insert=off --compatible=ansi --lock-tables=off --data -n -t --add-locks=off [DBNAME]</div>Chhttp://amd.co.at/adminwiki/Hardware/Remote_ManagementHardware/Remote Management2007-11-08T15:08:29Z<p>Ch: correct ilo2 min. version number</p>
<hr />
<div>= HP Integrated Lights-Out (ILO) =<br />
Standard/onboard on HP DL3xx G3+ Servers. See http://h18004.www1.hp.com/products/servers/management/ilo/ .<br />
<br />
If you are running a Unix-like OS (= no graphical output), you usually will not really need the Advanced Edition. Of course, dont forget to turn off all graphical stuff (graphical GRUB menu, boot splashes, X11 on boot, etc).<br />
<br />
On the other hand, the Advanced Edition allows for a few nice usage cases:<br />
* full remote installation (RedHat EL needs X11 for full customization during install)<br />
* remote "offline" firmware upgrade (the HP Software Maintenance CD needs X11 and a Mouse)<br />
* CD/Floppy forwarding from your workstation, or from HTTP Servers<br />
You can enter the same Advanced Edition license key on multiple servers - but you cannot remove it from an ILO! You better have the correct count of Licenses ...<br />
<br />
Newer firmware versions (1.70+) support SSH and [http://www.dmtf.org/standards/smash SMASH CLP].<br />
<br />
= HP Integrated Lights-Out 2 (ILO2) =<br />
Standard/onboard on HP DL3xx G5+ Servers.<br />
<br />
Basically the same as the old HP ILO, but with new quirks :)<br />
<br />
* No longer has textmode support. You '''need''' the Advanced Edition if you want to see '''any''' Video Output.<br />
* But has a Virtual Serial Port, so you could use ttyS0 as your console.<br />
* SSH can only drive the VSP.<br />
<br />
Upgrade to at least 1.42 to fix most problems, including long login delays, spontanous ILO2 reboots, etc.<br />
<br />
== Virtual CD-ROM ==<br />
<br />
To set up a virtual CD-ROM, which uses an ISO File on a remote location as its source, try this, via ssh:<br />
<br />
<pre><br />
vm cdrom insert http://192.168.0.2/grml_1.0.iso<br />
vm cdrom set connect<br />
vm cdrom set boot_always<br />
</pre><br />
<br />
= Compaq RILOE =<br />
The good old plug-in cards from Compaq. Work in the DL360G1/2, DL380G1 and possibly in other servers. Supports text-mode only, but floppy images (can be used to load etherboot from the floppy image and then let it pxe-boot).<br />
<br />
= Dell ... =<br />
<br />
= [http://www.dmtf.org/standards/smash SMASH CLP] useful commands =<br />
<br />
* reset /system1/power ...</div>Chhttp://amd.co.at/adminwiki/Hardware/Remote_ManagementHardware/Remote Management2007-11-08T14:39:05Z<p>Ch: /* HP Integrated Lights-Out 2 (ILO2) */</p>
<hr />
<div>= HP Integrated Lights-Out (ILO) =<br />
Standard/onboard on HP DL3xx G3+ Servers. See http://h18004.www1.hp.com/products/servers/management/ilo/ .<br />
<br />
If you are running a Unix-like OS (= no graphical output), you usually will not really need the Advanced Edition. Of course, dont forget to turn off all graphical stuff (graphical GRUB menu, boot splashes, X11 on boot, etc).<br />
<br />
On the other hand, the Advanced Edition allows for a few nice usage cases:<br />
* full remote installation (RedHat EL needs X11 for full customization during install)<br />
* remote "offline" firmware upgrade (the HP Software Maintenance CD needs X11 and a Mouse)<br />
* CD/Floppy forwarding from your workstation, or from HTTP Servers<br />
You can enter the same Advanced Edition license key on multiple servers - but you cannot remove it from an ILO! You better have the correct count of Licenses ...<br />
<br />
Newer firmware versions (1.70+) support SSH and [http://www.dmtf.org/standards/smash SMASH CLP].<br />
<br />
= HP Integrated Lights-Out 2 (ILO2) =<br />
Standard/onboard on HP DL3xx G5+ Servers.<br />
<br />
Basically the same as the old HP ILO, but with new quirks :)<br />
<br />
* No longer has textmode support. You '''need''' the Advanced Edition if you want to see '''any''' Video Output.<br />
* But has a Virtual Serial Port, so you could use ttyS0 as your console.<br />
* SSH can only drive the VSP.<br />
<br />
Upgrade to at least 1.43 to fix most problems, including long login delays, spontanous ILO2 reboots, etc.<br />
<br />
== Virtual CD-ROM ==<br />
<br />
To set up a virtual CD-ROM, which uses an ISO File on a remote location as its source, try this, via ssh:<br />
<br />
<pre><br />
vm cdrom insert http://192.168.0.2/grml_1.0.iso<br />
vm cdrom set connect<br />
vm cdrom set boot_always<br />
</pre><br />
<br />
= Compaq RILOE =<br />
The good old plug-in cards from Compaq. Work in the DL360G1/2, DL380G1 and possibly in other servers. Supports text-mode only, but floppy images (can be used to load etherboot from the floppy image and then let it pxe-boot).<br />
<br />
= Dell ... =<br />
<br />
= [http://www.dmtf.org/standards/smash SMASH CLP] useful commands =<br />
<br />
* reset /system1/power ...</div>Chhttp://amd.co.at/adminwiki/Web_DevelopmentWeb Development2007-11-01T03:46:14Z<p>Ch: </p>
<hr />
<div>Not really the scope of the average administrator, but often enough we're faced with writing smaller (or larger) applications for our own use or are forced to fix code botched together by more or less competent developers.<br />
<br />
<br />
= The choice =<br />
<br />
Although PHP might be your first choice when doing "web development" that doesn't mean it's the best one.<br />
<br />
Food for thought:<br />
<br />
* [http://tnx.nl/php PHP in contrast to perl]<br />
* [http://czth.net/pH/PHPSucks PHPSucks]<br />
* [http://www.ukuug.org/events/linux2002/papers/html/php/ Experiences of using PHP in large websites]<br />
<br />
== Unbiased facts: ==<br />
<br />
* PHPs availability is much better on "foreign" servers.<br />
* PHP is much easier to use than Perl for beginners, because you can just throw HTML in PHP files (or vice versa).<br />
* Perl is much cleaner than PHP.<br />
* Perls webserver integration sucks. Neither CGI, nor mod_perl nor fastcgi give you the ease of use of PHP.<br />
<br />
= PHP =<br />
<br />
== Administration == <br />
* Securing<br />
* Tuning (Zend Optimizer & replacements)<br />
<br />
== Programming ==<br />
<br />
* [[Smarty]]<br />
* [[PEAR]]<br />
<br />
= Perl =<br />
* FastCGI vs. mod_perl<br />
* [[Perl Package Management]]<br />
<br />
= Python =<br />
<br />
= Ruby =<br />
<br />
= Java =<br />
* Tomcat</div>Chhttp://amd.co.at/adminwiki/OpenntpdOpenntpd2007-11-01T03:42:32Z<p>Ch: </p>
<hr />
<div>OpenNTPd is OpenBSD's ntpd.<br />
<br />
It's designed to be secure, easy to set up and run smoothly. Of course it can also act as a full featured NTP server and export the local time.<br />
<br />
On old Linux machines, openntpd comes in very handy, if your old ntpd won't talk to your fresh NTP Servers. Just fetch the "Portable" version from http://www.openntpd.org/portable.html and "make install" it.</div>Chhttp://amd.co.at/adminwiki/Main_PageMain Page2007-11-01T03:32:41Z<p>Ch: </p>
<hr />
<div>== The admin wiki. ==<br />
<br />
<table width=100%><br />
<tr><td valign=top><br />
<br />
<table width=50%><br />
<tr><br />
<td>[[Image:Icon-operating_systems.png]]</td><br />
<td>[[Operating Systems]]<br>Linux, *BSD, Solaris, ...</td><br />
</tr><tr><br />
<td>[[Image:Icon-networking.png]]</td><br />
<td>[[Networking]]<br>Remote Boot, Firewall, VPN, ...</td><br />
</tr><tr><br />
<td>[[Image:Icon-tools.png]]</td><br />
<td>[[Tools]]<br>CLI Tools, Editors, ...</td><br />
</tr><tr><br />
<td>[[Image:Icon-software_solutions.png]]</td><br />
<td>[[Software Solutions]]<br>Monitoring, Backup, AAA, ...</td><br />
</tr><tr><br />
<td>[[Image:Icon-hardware.png]]</td><br />
<td>[[Hardware]]<br>Servers, Storage, Management, ...</td><br />
</tr><tr><br />
<td><br></td><br />
</tr><tr><br />
<td>[[Image:Icon-about.png]]</td><td>[[AdminWiki:About|About]]<br>Motivation, the guys behind the scenes</td><br />
</tr><br />
</table><br />
<br />
</td><br />
<td valign=top><br />
<br />
<table width=50%><br />
<tr><br />
<td>[[Image:Icon-daemons.png]]</td><br />
<td>[[Daemons|Daemons and Services]]<br>HTTP, SMTP, IMAP, ...</td><br />
</tr><tr><br />
<td>[[Image:Icon-databases.png]]</td><br />
<td>[[Databases]]<br>MySQL, PostgreSQL, LDAP, ...</td><br />
</tr><tr><br />
<td>[[Image:Icon-webdevelopment.png]]</td><br />
<td>[[Web Development]]<br>Perl, PHP, FastCGI, ...</td><br />
</tr><tr><br />
<td>[[Image:Icon-clustering.png]]</td><br />
<td>[[Clustering|Clustering and HA]]<br>Load Balancing, Heartbeat, Redundancy, ...</td><br />
</tr><tr><br />
<td>[[Image:Icon-best_common_practices.png]]</td><br />
<td>[[Best common practices]]<br>If that was my server, I would do this differently...</td><br />
</tr><tr><br />
<td><br></td><br />
</tr><tr><br />
<td>[[Image:Icon-todo.png]]</td><td>[[Todo|Lots of work to be done]]<br>the long list</td><br />
</tr><br />
</table><br />
<br />
</td><br />
</tr><br />
</table></div>Chhttp://amd.co.at/adminwiki/Hardware/Remote_ManagementHardware/Remote Management2007-11-01T03:32:13Z<p>Ch: </p>
<hr />
<div>= HP Integrated Lights-Out (ILO) =<br />
Standard/onboard on HP DL3xx G3+ Servers. See http://h18004.www1.hp.com/products/servers/management/ilo/ .<br />
<br />
If you are running a Unix-like OS (= no graphical output), you usually will not really need the Advanced Edition. Of course, dont forget to turn off all graphical stuff (graphical GRUB menu, boot splashes, X11 on boot, etc).<br />
<br />
On the other hand, the Advanced Edition allows for a few nice usage cases:<br />
* full remote installation (RedHat EL needs X11 for full customization during install)<br />
* remote "offline" firmware upgrade (the HP Software Maintenance CD needs X11 and a Mouse)<br />
* CD/Floppy forwarding from your workstation, or from HTTP Servers<br />
You can enter the same Advanced Edition license key on multiple servers - but you cannot remove it from an ILO! You better have the correct count of Licenses ...<br />
<br />
Newer firmware versions (1.70+) support SSH and [http://www.dmtf.org/standards/smash SMASH CLP].<br />
<br />
= HP Integrated Lights-Out 2 (ILO2) =<br />
Standard/onboard on HP DL3xx G5+ Servers.<br />
<br />
Basically the same as the old HP ILO, but with new quirks :)<br />
<br />
* No longer has textmode support. You '''need''' the Advanced Edition if you want to see '''any''' Video Output.<br />
* But has a Virtual Serial Port, so you could use ttyS0 as your console.<br />
* SSH can only drive the VSP.<br />
<br />
Upgrade to at least 1.43 to fix most problems, including long login delays, spontanous ILO2 reboots, etc.<br />
<br />
= Compaq RILOE =<br />
The good old plug-in cards from Compaq. Work in the DL360G1/2, DL380G1 and possibly in other servers. Supports text-mode only, but floppy images (can be used to load etherboot from the floppy image and then let it pxe-boot).<br />
<br />
= Dell ... =<br />
<br />
= [http://www.dmtf.org/standards/smash SMASH CLP] useful commands =<br />
<br />
* reset /system1/power ...</div>Chhttp://amd.co.at/adminwiki/RsyncRsync2007-09-27T17:02:29Z<p>Ch: /* Minimum working config */</p>
<hr />
<div>= [http://rsync.samba.org/ rsync] =<br />
<br />
rsync is the perfect tool when you need to synchronize two files or file trees, be it just your homedirectory or a whole server. It's basic operation is like scp or rcp, but it has many more options which aim to keep traffic low if the amount of identical files on sender and receiver is high.<br />
== cookbook ==<br />
<br />
===Synchronize a directory===<br />
<br />
rsync -av -e [[ssh]] --delete <directory to sync> <username>@<remote host>:<remote directory><br />
<br />
===Migrating a server===<br />
<br />
To migrate a server,<br />
<br />
<pre><br />
rsync -avH -P --numeric-ids -x --delete / <additional mountpoints> <target><br />
</pre><br />
<br />
usually does the trick. This should result in an exact copy of the source tree and gives you fancy progress bars while you wait for the rsync run to finish ;).<br />
<br />
The --numeric-ids is necessary because you otherwise end up having garbled usernames when the source and destination OS use different username <-> userid mappings.<br />
<br />
===Minimum working daemon config===<br />
<br />
<pre><br />
log file = /var/log/rsyncd.log<br />
use chroot = yes<br />
<br />
[modulename]<br />
path = /path/to/module<br />
read only = yes<br />
list = yes<br />
transfer logging = yes <br />
</pre></div>Chhttp://amd.co.at/adminwiki/RedHatRedHat2006-06-10T14:24:46Z<p>Ch: /* OS Installation */</p>
<hr />
<div>= Package Management =<br />
<br />
The current choices for installing RPMs are: (listed by convience)<br />
* rpm --install: can only install a single remote rpm<br />
* up2date: you probably need an RedHat EL subscription<br />
* yum: the better up2date, still terrible to use<br />
* apt: ported from Debian, lastly a tool which does it right<br />
<br />
Useful options to rpm are <tt>-vh</tt>, which give you verbose output and a progress bar when doing package installation/removal.<br />
<br />
== Installing Kernels ==<br />
<br />
If you install a kernel using one of the tools above, better check /boot and /etc/grub.conf afterwards. At present, updating these files is job of the package manager, and not of the kernel rpm postinst script. Of course, all tools have implemented this differently and you simply can't rely on it to work.<br />
<br />
Also, always '''install''' kernels (rpm -ivh) instead of updating (rpm -Uvh) them. Updating will not preserve the old version ...<br />
<br />
== Building your own RPMs ==<br />
<br />
Never build RPMs as root. Spec files are free to specify any command they want, and can leave files around in your /-filesystem or, even worse, cause real damage to your installation. If you must build as root, better do it on a machine which you can reinstall/reimage quickly.<br />
<br />
<tt>rpmbuild --rebuild foo.src.rpm</tt> is the command of your choice. If you have a spec file instead, try with <tt>rpmbuild -ba foo.spec</tt>.<br />
<br />
FIXME: tell about setting up non-root rpmbuild<br />
<br />
= OS Installation =<br />
<br />
The best thing you can do is to pass a kickstart file to the installer. This way you don't need graphics support and you don't get the whole crap of packages you won't need. Have a look at the possible [http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/sysadmin-guide/s1-kickstart2-options.html kickstart options] (for RHEL4).<br />
<br />
Remote booting the installer using PXE works; you can use CDs, HTTP, FTP or NFS as the package source.<br />
<br />
Anaconda (the RedHat installer) leaves the config it used in /root/anaconda-ks.cfg after installation. You can use that as a starting point for your ks.cfg, or write one from scratch.<br />
<br />
For %packages you probably want at least: <br />
* e2fsprogs<br />
* grub<br />
* lvm2<br />
* @ text-internet (which gets you links, wget, etc). <br />
<br />
For serious servers, also install:<br />
* @ development-tools (so you rebuild SRPMS)<br />
* kernel-smp<br />
* kernel-devel/kernel-smp-devel (needed for custom drivers)<br />
* ntp<br />
* net-snmp (gets you snmpd)<br />
<br />
If you want to make your life a bit easier, also get:<br />
* screen<br />
* vim-enhanced<br />
* strace<br />
* rsync<br />
* lsof<br />
* xorg-x11 (this is just the base, so X11 forwarding over ssh works)<br />
* cvs<br />
<br />
= Useful Links =<br />
* [http://www.akadia.com/services/redhat_static_routes.html Setting up Static Routes on Redhat, past and present]</div>Chhttp://amd.co.at/adminwiki/RedHatRedHat2006-06-10T14:11:34Z<p>Ch: +osinstall</p>
<hr />
<div>= Package Management =<br />
<br />
The current choices for installing RPMs are: (listed by convience)<br />
* rpm --install: can only install a single remote rpm<br />
* up2date: you probably need an RedHat EL subscription<br />
* yum: the better up2date, still terrible to use<br />
* apt: ported from Debian, lastly a tool which does it right<br />
<br />
Useful options to rpm are <tt>-vh</tt>, which give you verbose output and a progress bar when doing package installation/removal.<br />
<br />
== Installing Kernels ==<br />
<br />
If you install a kernel using one of the tools above, better check /boot and /etc/grub.conf afterwards. At present, updating these files is job of the package manager, and not of the kernel rpm postinst script. Of course, all tools have implemented this differently and you simply can't rely on it to work.<br />
<br />
Also, always '''install''' kernels (rpm -ivh) instead of updating (rpm -Uvh) them. Updating will not preserve the old version ...<br />
<br />
== Building your own RPMs ==<br />
<br />
Never build RPMs as root. Spec files are free to specify any command they want, and can leave files around in your /-filesystem or, even worse, cause real damage to your installation. If you must build as root, better do it on a machine which you can reinstall/reimage quickly.<br />
<br />
<tt>rpmbuild --rebuild foo.src.rpm</tt> is the command of your choice. If you have a spec file instead, try with <tt>rpmbuild -ba foo.spec</tt>.<br />
<br />
FIXME: tell about setting up non-root rpmbuild<br />
<br />
= OS Installation =<br />
<br />
= Useful Links =<br />
* [http://www.akadia.com/services/redhat_static_routes.html Setting up Static Routes on Redhat, past and present]</div>Chhttp://amd.co.at/adminwiki/ICP_gdthICP gdth2006-06-04T03:08:41Z<p>Ch: </p>
<hr />
<div>A set of serious SCSI/SATA RAID controllers manufactured by [http://www.icp-vortex.com/ ICP Vortex].<br />
<br />
Linux supports them using the gdth driver (in-kernel). This works well, and <tt>icpcon</tt> (the ICP management console) works with it.</div>Chhttp://amd.co.at/adminwiki/Storage_solutionsStorage solutions2006-06-04T02:38:34Z<p>Ch: /* ICP Vortex */</p>
<hr />
<div>=Hostbased solutions=<br />
<br />
==Hardware raid controllers==<br />
<br />
===[[LSI Logic Megaraid|Megaraid]]===<br />
<br />
LSI Logic is one of the main manufacturers for SCSI Raid controllers. Their products are available under their own name and a variety of rebranded cards (Intel, etc).<br />
<br />
===ICP Vortex===<br />
<br />
ICP is the other one main SCSI Raid manafacturer. They have controllers based on [[ICP gdth|gdth]], and since they were bought by Adaptec some newer, incompatible controllers. Before that, ICP was, for some time, owned by Intel, so older Intel SCSI Raid cards are also gdth-based.<br />
<br />
===[[Promise Technology]]===<br />
<br />
Stay clear.<br />
<br />
===[[3ware]]===<br />
<br />
3ware produces low cost ATA and SATA RAID controllers. The common feeling about their speed and stability is mixed.<br />
<br />
=Centralized solutions=</div>Chhttp://amd.co.at/adminwiki/RedHatRedHat2006-06-04T02:37:14Z<p>Ch: /* Installing Kernels */</p>
<hr />
<div>= Package Management =<br />
<br />
The current choices for installing RPMs are: (listed by convience)<br />
* rpm --install: can only install a single remote rpm<br />
* up2date: you probably need an RedHat EL subscription<br />
* yum: the better up2date, still terrible to use<br />
* apt: ported from Debian, lastly a tool which does it right<br />
<br />
Useful options to rpm are <tt>-vh</tt>, which give you verbose output and a progress bar when doing package installation/removal.<br />
<br />
== Installing Kernels ==<br />
<br />
If you install a kernel using one of the tools above, better check /boot and /etc/grub.conf afterwards. At present, updating these files is job of the package manager, and not of the kernel rpm postinst script. Of course, all tools have implemented this differently and you simply can't rely on it to work.<br />
<br />
Also, always '''install''' kernels (rpm -ivh) instead of updating (rpm -Uvh) them. Updating will not preserve the old version ...<br />
<br />
== Building your own RPMs ==<br />
<br />
Never build RPMs as root. Spec files are free to specify any command they want, and can leave files around in your /-filesystem or, even worse, cause real damage to your installation. If you must build as root, better do it on a machine which you can reinstall/reimage quickly.<br />
<br />
<tt>rpmbuild --rebuild foo.src.rpm</tt> is the command of your choice. If you have a spec file instead, try with <tt>rpmbuild -ba foo.spec</tt>.<br />
<br />
FIXME: tell about setting up non-root rpmbuild<br />
<br />
= Useful Links =<br />
* [http://www.akadia.com/services/redhat_static_routes.html Setting up Static Routes on Redhat, past and present]</div>Chhttp://amd.co.at/adminwiki/RedHatRedHat2006-06-04T02:34:07Z<p>Ch: /* Useful Links */</p>
<hr />
<div>= Package Management =<br />
<br />
The current choices for installing RPMs are: (listed by convience)<br />
* rpm --install: can only install a single remote rpm<br />
* up2date: you probably need an RedHat EL subscription<br />
* yum: the better up2date, still terrible to use<br />
* apt: ported from Debian, lastly a tool which does it right<br />
<br />
Useful options to rpm are <tt>-vh</tt>, which give you verbose output and a progress bar when doing package installation/removal.<br />
<br />
== Installing Kernels ==<br />
<br />
If you install a kernel using one of the tools above, better check /boot and /etc/grub.conf afterwards. At present, updating these files is job of the package manager, and not of the kernel rpm postinst script. Of course, all tools have implemented this differently and you simply can't rely on it to work.<br />
<br />
Also, always '''install''' kernels (rpm -ivh) instead of updating (rpm -Uvh) them. Updating will not preserve the old version ...<br />
<br />
= Useful Links =<br />
* [http://www.akadia.com/services/redhat_static_routes.html Setting up Static Routes on Redhat, past and present]</div>Chhttp://amd.co.at/adminwiki/RedHatRedHat2006-06-04T02:33:56Z<p>Ch: /* Package Management */</p>
<hr />
<div>= Package Management =<br />
<br />
The current choices for installing RPMs are: (listed by convience)<br />
* rpm --install: can only install a single remote rpm<br />
* up2date: you probably need an RedHat EL subscription<br />
* yum: the better up2date, still terrible to use<br />
* apt: ported from Debian, lastly a tool which does it right<br />
<br />
Useful options to rpm are <tt>-vh</tt>, which give you verbose output and a progress bar when doing package installation/removal.<br />
<br />
== Installing Kernels ==<br />
<br />
If you install a kernel using one of the tools above, better check /boot and /etc/grub.conf afterwards. At present, updating these files is job of the package manager, and not of the kernel rpm postinst script. Of course, all tools have implemented this differently and you simply can't rely on it to work.<br />
<br />
Also, always '''install''' kernels (rpm -ivh) instead of updating (rpm -Uvh) them. Updating will not preserve the old version ...<br />
<br />
== Useful Links ==<br />
* [http://www.akadia.com/services/redhat_static_routes.html Setting up Static Routes on Redhat, past and present]</div>Chhttp://amd.co.at/adminwiki/RedHatRedHat2006-06-04T02:33:26Z<p>Ch: /* Installing Kernels */</p>
<hr />
<div>== Package Management ==<br />
<br />
The current choices for installing RPMs are: (listed by convience)<br />
* rpm --install: can only install a single remote rpm<br />
* up2date: you probably need an RedHat EL subscription<br />
* yum: the better up2date, still terrible to use<br />
* apt: ported from Debian, lastly a tool which does it right<br />
<br />
Useful options to rpm are <tt>-vh</tt>, which give you verbose output and a progress bar when doing package installation/removal.<br />
<br />
== Installing Kernels ==<br />
<br />
If you install a kernel using one of the tools above, better check /boot and /etc/grub.conf afterwards. At present, updating these files is job of the package manager, and not of the kernel rpm postinst script. Of course, all tools have implemented this differently and you simply can't rely on it to work.<br />
<br />
Also, always '''install''' kernels (rpm -ivh) instead of updating (rpm -Uvh) them. Updating will not preserve the old version ...<br />
<br />
== Useful Links ==<br />
* [http://www.akadia.com/services/redhat_static_routes.html Setting up Static Routes on Redhat, past and present]</div>Chhttp://amd.co.at/adminwiki/RedHatRedHat2006-06-04T02:33:07Z<p>Ch: </p>
<hr />
<div>== Package Management ==<br />
<br />
The current choices for installing RPMs are: (listed by convience)<br />
* rpm --install: can only install a single remote rpm<br />
* up2date: you probably need an RedHat EL subscription<br />
* yum: the better up2date, still terrible to use<br />
* apt: ported from Debian, lastly a tool which does it right<br />
<br />
Useful options to rpm are <tt>-vh</tt>, which give you verbose output and a progress bar when doing package installation/removal.<br />
<br />
== Installing Kernels ==<br />
<br />
If you install a kernel using one of the tools above, better check /boot and /etc/grub.conf afterwards. At present, updating these files is job of the package manager, and not of the kernel rpm postinst script. Of course, all tools have implemented this differently and you simply can't rely on it to work.<br />
<br />
Also, always 'install' kernels (rpm -ivh) instead of updating (rpm -Uvh) them. Updating will not preserve the old version ...<br />
<br />
== Useful Links ==<br />
* [http://www.akadia.com/services/redhat_static_routes.html Setting up Static Routes on Redhat, past and present]</div>Chhttp://amd.co.at/adminwiki/Promise_TechnologyPromise Technology2006-06-04T02:13:36Z<p>Ch: </p>
<hr />
<div>Promise Technology manufactures ATA and SATA (RAID) controllers, which usually feature Promise chips for the device interface and nearly always no RAID CPU, therefore they are "soft-RAIDs" (except the higher-priced models).<br />
<br />
Stay away from the Promise-based (PDCxxxxx) IDE-Controllers under Linux (and probably other Opensource OS too). They ''WILL'' cause (mostly silent) data corruption. Get a Silicon Image based controller instead ([http://geizhals.at/eu/a36191.html Dawicontrol DC-133] for example).<br />
<br />
The usual error you will see is: "attempt to access beyond end of device". Good luck.</div>Chhttp://amd.co.at/adminwiki/Promise_TechnologyPromise Technology2006-06-04T02:13:14Z<p>Ch: </p>
<hr />
<div>Promise Technology manufactures ATA and SATA (RAID) controllers, which usually feature Promise chips for the device interface and nearly always no RAID CPU, therefore they are "soft-RAIDs" (except the higher-priced models).<br />
<br />
Stay away from the Promise-based (PDCxxxxx) IDE-Controllers under Linux (and probably other Opensource OS too). They ''WILL'' cause (mostly silent) data corruption. Get a Silicon Image based controller instead ([http://geizhals.at/eu/a36191.html Dawicontrol DC-133] for example).<br />
<br />
The usual error you will see contains something like "attempt to access beyond end of device". Good luck.</div>Chhttp://amd.co.at/adminwiki/Promise_TechnologyPromise Technology2006-06-03T21:50:06Z<p>Ch: /* promise rant */</p>
<hr />
<div>Promise Technology manufactures ATA and SATA (RAID) controllers, which usually feature Promise chips for the device interface and no RAID CPU, therefore they are "soft-RAIDs".<br />
<br />
Almost everyone has some stories to tell about the usage of Promise controllers, most of these end with "stay away from Promise". <br />
<br />
Common to these trouble reports are:<br />
* Controller is used with an OS that is not Windows<br />
* More-than-nothing I/O<br />
* Log entries like "... tried to write beyond end of device ..."<br />
* RAID/Data was missing after a reboot</div>Chhttp://amd.co.at/adminwiki/Storage_solutionsStorage solutions2006-06-03T21:45:29Z<p>Ch: /* Hardware raid controllers */</p>
<hr />
<div>=Hostbased solutions=<br />
<br />
==Hardware raid controllers==<br />
<br />
===[[LSI Logic Megaraid|Megaraid]]===<br />
<br />
LSI Logic is one of the main manufacturers for SCSI Raid controllers. Their products are available under their own name and a variety of rebranded cards (Intel, etc).<br />
<br />
===ICP Vortex===<br />
<br />
ICP is the other one main SCSI Raid manafacturer. They have controllers based on [[gdth]], and since they were bought by Adaptec some newer, incompatible controllers. Before that, ICP was, for some time, owned by Intel, so older Intel SCSI Raid cards are also gdth-based.<br />
<br />
===[[Promise Technology]]===<br />
<br />
Stay clear.<br />
<br />
===[[3ware]]===<br />
<br />
3ware produces low cost ATA and SATA RAID controllers. The common feeling about their speed and stability is mixed.<br />
<br />
=Centralized solutions=</div>Chhttp://amd.co.at/adminwiki/Perl_Package_ManagementPerl Package Management2006-05-27T01:47:09Z<p>Ch: /* RedHat */</p>
<hr />
<div>It's important that you have an complete overview over which perl modules are installed on a server, in case that you ever need to migrate the setup. The package manager of your OS is a good facility, given that it has a consistent naming style for perl modules.<br />
<br />
=Linux=<br />
<br />
==Debian==<br />
<br />
Most modules are already packaged. Given a module-name of Foo::Bar <br />
apt-cache search foo bar perl<br />
should be able to find it. If it isn't available, install <tt>dh-make-perl</tt> and then run<br />
dh-make-perl --install --cpan Foo::Bar<br />
to automatically fetch the sources, build the module, create the package and install it.<br />
<br />
==RedHat==<br />
On older versions of RedHat, you could use cpanflute or cpanflute2, which was usually included in either the rpm.rpm or some rpm-devel.rpm. On newer versions, you should use [http://perl.arix.com/cpan2rpm/ cpan2rpm]. <br />
<br />
cpan2rpm can be installed by rebuilding the source rpm. You will also need Module::Build, which is unfortunately not available as an RPM on RHEL. Instead you can either install it from some source rpm, or build it yourself using cpan2rpm on another machine, or just install it from CPAN (and remove it after building it with cpan2rpm).<br />
<br />
When building rpms, you should at least specify the --packager option. Or just put it in your $HOME/.cpan2rpm file (you should build your [http://www.redhat.com/archives/rpm-list/2001-March/msg00413.html rpms as non-root]).<br />
<br />
To install a package Foo::Bar after building and fetching it from CPAN:<br />
cpan2rpm --install Foo::Bar<br />
<br />
To pass an option of "-l" to Makefile.PL, use --make-maker:<br />
cpan2rpm --install --make-maker "-l" Foo::Bar</div>Chhttp://amd.co.at/adminwiki/BigBrotherBigBrother2006-05-26T23:28:49Z<p>Ch: </p>
<hr />
<div>[http://www.bb4.org/ BigBrother] is based on a simply concept:<br />
* one server, which collects all data sent by<br />
* the clients, which run on every to-be monitored server.<br />
<br />
The client agents not only report good/failure, but also send other (related) status information, possibly long reports about your servers health.<br />
<br />
While this is a good concept, the original BigBrother implementation is crap. And they even charge you a lot for this bunch of unholy shell scripts. Avoid it.<br />
<br />
= BigSister =<br />
<br />
[http://bigsister.sourceforge.net/ BigSister] is a complete rewrite of the BigBrother system (which you aren't allowed to modify anyway). BS is written in OO-perl and has a clean concept of doing its stuff. It is also highly configurable and '''compatible''' with the BigBrother clients. So you can slowly migrate from a BB-based monitoring system to BS. Just upgrade the server/collector and your dreams become true (or at least sort of :).<br />
<br />
== Best practice ==<br />
(Most of this applies to BB as well.) Install the agents into some folder in /opt, probably /opt/monitoring/bsc. Have the clients run as a special user (maybe bs, or bb if you upgrade from BB), and have them write their status stuff into $BSHOME/tmp or $BSHOME/var. Don't forget to monitor the disk space for $BSHOME - else your agents may hang (and report ok while something goes wrong).<br />
<br />
It's advisable to send copies of the alerts to a special log email address, so you have an easily searchable source for previous alerts.</div>Chhttp://amd.co.at/adminwiki/Software_SolutionsSoftware Solutions2006-05-26T23:19:17Z<p>Ch: /* monitoring */</p>
<hr />
<div>= Solutions in Software =<br />
<br />
== Backup ==<br />
<br />
* [[rsync]]<br />
* [[bacula]] <br />
* [[arkeia]] (commercial)<br />
<br />
== monitoring ==<br />
<br />
* [[nagios]]<br />
* [[BigBrother]] and its derivates<br />
* [[munin]]<br />
* [[smokeping]] <br />
* [[monit]]<br />
* [[HotSaNIC]]<br />
* [[cricket]]<br />
* [[cacti]]<br />
<br />
== migrating servers ==<br />
<br />
* [[rsync/tar/cp]] -a <br />
<br />
== AAA ==<br />
<br />
* [[PAM/NSS]]<br />
* [[LDAP]]<br />
* [[RADIUS]]</div>Chhttp://amd.co.at/adminwiki/Best_common_practicesBest common practices2006-05-26T23:13:21Z<p>Ch: /* Environment */</p>
<hr />
<div>This should give you a rundown on the absolute minimum ''every'' server should have.<br />
<br />
= Time issues =<br />
<br />
== Keep in sync ==<br />
<br />
There is absolutely ''no'' excuse for not having a correctly synchronized clock. This will bite you when you you've to compare logfiles from multiple servers and cause problems when you need to deliver ''accurate'' logs (police investigation, etc.).<br />
<br />
The problem got worse in the last few years (at least that's my impression) because processors got faster and/or time-keeping-mechanisms sloppier. What the operating system basically does<ref>I'm not completely sure about that. If I'm horribly wrong here, please tell me so ;)</ref> when booting up is fetching the current time and date from the RTC, then taking a wild guess on how many CPU cycles<ref>(or any other time source, e.g. HPET)</ref> are approximately one second and then using this guesstimate as long as the OS runs, which unfortunately is almost never accuracte. Excessive IRQ usage, CPU cycle modulation (power saving) and other factors might also increase the inaccuracy.<br />
<br />
What a NTP daemon basically does is comparing the system time with an external timesource (usually a NTP server), estimating on how far off the OS is and then disciplining the system time. It also tracks the inaccuracy of the system clock so that it can keep the clock in sync even when the ntp server should be unreachable for longer periods.<br />
<br />
== Never trust the RTC ==<br />
<br />
Another major issue are wrong times in the RTC. You have to ensure that your system time is correct ''before'' your operating switches to multi-user mode.<br />
<br />
Common scenario in DST-countries:<br />
<br />
*Your RTC is set to the local timezone.<br />
*Your server has an uptime >= 180 days, meaning that it probably has passed a DST<ref>Daylight saving time</ref> boundary.<br />
*Your server crashes.<br />
<br />
<br />
At this point, if you haven't taken any precautions, you're fucked as soon as the server is online again.<br />
<br />
* Best case: wrong logfile-entries and a few incorrect mtimes on files. <br />
* Worst case: Important business data (accounting, transactions, etc.) have the wrong timestamps. Good luck correcting these by hand.<br />
<br />
<br />
There are a few solutions to this problem:<br />
<br />
* Put ntpdate in your startup scripts ''after'' your network has initialized and ''before'' ntp-server starts. Test it!<br />
:This has the drawback that when the network or your ntp-server of choice is down you'll still run into troubles<br />
* Have hwclock write the system time to the RTC every now and then.<br />
:This is still dangerous, since there's a window where your server will boot with the wrong time in the RTC, but it minimizes the risk noticeably.<br />
* Set the hardware-clock to UTC<br />
:Untested. If anybody successfully uses this in a DST-zone, please contact me.<br />
<br />
= Installed software =<br />
<br />
This is the absolute minimum of software ''every'' server should have installed.<br />
<br />
* a working compiler and linker toolchain + headers.<br />
* A syscall-level diagnostic tool like strace/truss/etc.<br />
* A usable web-browser. links, lynx, elinks, etc.<br />
* A usable ftp-client. ncftp or lftp.<br />
* A multi-purpose download agent. wget or curl.<br />
* tcpdump<br />
* lsof<br />
* A working vim. Not old vi, not ed, just vim.<br />
<br />
Failure to meet these criterias will catch up with you someday when you expect it the least.<br />
<br />
In many cases you will also need xauth, so ssh/X11 forwarding can work. You don't need it now, but you will need it at some point.<br />
<br />
= Environment =<br />
Set a clean environment: set $EDITOR to vi/vim, set $LANG to <tt>en_US.UTF-8</tt> (or make sure it is ''unset'').<br />
<br />
= Footnotes =<br />
<references/><br />
<br />
<br />
= The rest =<br />
<br />
* backup<br />
* monitoring<br />
* sane logging<br />
* handling of (security) updates</div>Chhttp://amd.co.at/adminwiki/Best_common_practicesBest common practices2006-05-26T23:12:38Z<p>Ch: </p>
<hr />
<div>This should give you a rundown on the absolute minimum ''every'' server should have.<br />
<br />
= Time issues =<br />
<br />
== Keep in sync ==<br />
<br />
There is absolutely ''no'' excuse for not having a correctly synchronized clock. This will bite you when you you've to compare logfiles from multiple servers and cause problems when you need to deliver ''accurate'' logs (police investigation, etc.).<br />
<br />
The problem got worse in the last few years (at least that's my impression) because processors got faster and/or time-keeping-mechanisms sloppier. What the operating system basically does<ref>I'm not completely sure about that. If I'm horribly wrong here, please tell me so ;)</ref> when booting up is fetching the current time and date from the RTC, then taking a wild guess on how many CPU cycles<ref>(or any other time source, e.g. HPET)</ref> are approximately one second and then using this guesstimate as long as the OS runs, which unfortunately is almost never accuracte. Excessive IRQ usage, CPU cycle modulation (power saving) and other factors might also increase the inaccuracy.<br />
<br />
What a NTP daemon basically does is comparing the system time with an external timesource (usually a NTP server), estimating on how far off the OS is and then disciplining the system time. It also tracks the inaccuracy of the system clock so that it can keep the clock in sync even when the ntp server should be unreachable for longer periods.<br />
<br />
== Never trust the RTC ==<br />
<br />
Another major issue are wrong times in the RTC. You have to ensure that your system time is correct ''before'' your operating switches to multi-user mode.<br />
<br />
Common scenario in DST-countries:<br />
<br />
*Your RTC is set to the local timezone.<br />
*Your server has an uptime >= 180 days, meaning that it probably has passed a DST<ref>Daylight saving time</ref> boundary.<br />
*Your server crashes.<br />
<br />
<br />
At this point, if you haven't taken any precautions, you're fucked as soon as the server is online again.<br />
<br />
* Best case: wrong logfile-entries and a few incorrect mtimes on files. <br />
* Worst case: Important business data (accounting, transactions, etc.) have the wrong timestamps. Good luck correcting these by hand.<br />
<br />
<br />
There are a few solutions to this problem:<br />
<br />
* Put ntpdate in your startup scripts ''after'' your network has initialized and ''before'' ntp-server starts. Test it!<br />
:This has the drawback that when the network or your ntp-server of choice is down you'll still run into troubles<br />
* Have hwclock write the system time to the RTC every now and then.<br />
:This is still dangerous, since there's a window where your server will boot with the wrong time in the RTC, but it minimizes the risk noticeably.<br />
* Set the hardware-clock to UTC<br />
:Untested. If anybody successfully uses this in a DST-zone, please contact me.<br />
<br />
= Installed software =<br />
<br />
This is the absolute minimum of software ''every'' server should have installed.<br />
<br />
* a working compiler and linker toolchain + headers.<br />
* A syscall-level diagnostic tool like strace/truss/etc.<br />
* A usable web-browser. links, lynx, elinks, etc.<br />
* A usable ftp-client. ncftp or lftp.<br />
* A multi-purpose download agent. wget or curl.<br />
* tcpdump<br />
* lsof<br />
* A working vim. Not old vi, not ed, just vim.<br />
<br />
Failure to meet these criterias will catch up with you someday when you expect it the least.<br />
<br />
In many cases you will also need xauth, so ssh/X11 forwarding can work. You don't need it now, but you will need it at some point.<br />
<br />
= Environment =<br />
Set a clean environment: set $EDITOR to vi/vim, set $LANG (or make sure it is ''unset'').<br />
<br />
= Footnotes =<br />
<references/><br />
<br />
<br />
= The rest =<br />
<br />
* backup<br />
* monitoring<br />
* sane logging<br />
* handling of (security) updates</div>Chhttp://amd.co.at/adminwiki/Best_common_practicesBest common practices2006-05-26T23:11:40Z<p>Ch: /* Installed software */</p>
<hr />
<div>This should give you a rundown on the absolute minimum ''every'' server should have.<br />
<br />
= Time issues =<br />
<br />
== Keep in sync ==<br />
<br />
There is absolutely ''no'' excuse for not having a correctly synchronized clock. This will bite you when you you've to compare logfiles from multiple servers and cause problems when you need to deliver ''accurate'' logs (police investigation, etc.).<br />
<br />
The problem got worse in the last few years (at least that's my impression) because processors got faster and/or time-keeping-mechanisms sloppier. What the operating system basically does<ref>I'm not completely sure about that. If I'm horribly wrong here, please tell me so ;)</ref> when booting up is fetching the current time and date from the RTC, then taking a wild guess on how many CPU cycles<ref>(or any other time source, e.g. HPET)</ref> are approximately one second and then using this guesstimate as long as the OS runs, which unfortunately is almost never accuracte. Excessive IRQ usage, CPU cycle modulation (power saving) and other factors might also increase the inaccuracy.<br />
<br />
What a NTP daemon basically does is comparing the system time with an external timesource (usually a NTP server), estimating on how far off the OS is and then disciplining the system time. It also tracks the inaccuracy of the system clock so that it can keep the clock in sync even when the ntp server should be unreachable for longer periods.<br />
<br />
== Never trust the RTC ==<br />
<br />
Another major issue are wrong times in the RTC. You have to ensure that your system time is correct ''before'' your operating switches to multi-user mode.<br />
<br />
Common scenario in DST-countries:<br />
<br />
*Your RTC is set to the local timezone.<br />
*Your server has an uptime >= 180 days, meaning that it probably has passed a DST<ref>Daylight saving time</ref> boundary.<br />
*Your server crashes.<br />
<br />
<br />
At this point, if you haven't taken any precautions, you're fucked as soon as the server is online again.<br />
<br />
* Best case: wrong logfile-entries and a few incorrect mtimes on files. <br />
* Worst case: Important business data (accounting, transactions, etc.) have the wrong timestamps. Good luck correcting these by hand.<br />
<br />
<br />
There are a few solutions to this problem:<br />
<br />
* Put ntpdate in your startup scripts ''after'' your network has initialized and ''before'' ntp-server starts. Test it!<br />
:This has the drawback that when the network or your ntp-server of choice is down you'll still run into troubles<br />
* Have hwclock write the system time to the RTC every now and then.<br />
:This is still dangerous, since there's a window where your server will boot with the wrong time in the RTC, but it minimizes the risk noticeably.<br />
* Set the hardware-clock to UTC<br />
:Untested. If anybody successfully uses this in a DST-zone, please contact me.<br />
<br />
= Installed software =<br />
<br />
This is the absolute minimum of software ''every'' server should have installed.<br />
<br />
* a working compiler and linker toolchain + headers.<br />
* A syscall-level diagnostic tool like strace/truss/etc.<br />
* A usable web-browser. links, lynx, elinks, etc.<br />
* A usable ftp-client. ncftp or lftp.<br />
* A multi-purpose download agent. wget or curl.<br />
* tcpdump<br />
* lsof<br />
* A working vim. Not old vi, not ed, just vim.<br />
<br />
Failure to meet these criterias will catch up with you someday when you expect it the least.<br />
<br />
In many cases you will also need xauth, so ssh/X11 forwarding can work. You don't need it now, but you will need it at some point.<br />
<br />
= Footnotes =<br />
<references/><br />
<br />
<br />
= The rest =<br />
<br />
* backup<br />
* monitoring<br />
* sane logging<br />
* handling of (security) updates</div>Chhttp://amd.co.at/adminwiki/Perl_Package_ManagementPerl Package Management2006-05-26T22:23:44Z<p>Ch: </p>
<hr />
<div>=Linux=<br />
<br />
==Debian==<br />
dh-make-perl<br />
<br />
==RedHat==<br />
On older versions of RedHat, you could use cpanflute or cpanflute2, which was usually included in either the rpm.rpm or some rpm-devel.rpm. On newer versions, you should use [http://perl.arix.com/cpan2rpm/ cpan2rpm]. <br />
<br />
cpan2rpm can be installed by rebuilding the source rpm. You will also need Module::Build, which is unfortunately not available as an RPM on RHEL. Instead you can either install it from some source rpm, or build it yourself using cpan2rpm on another machine, or just install it from CPAN (and remove it after building it with cpan2rpm).<br />
<br />
When building rpms, you should at least specify the --packager option. Or just put it in your $HOME/.cpan2rpm file (you should build your [http://www.redhat.com/archives/rpm-list/2001-March/msg00413.html rpms as non-root]).</div>Chhttp://amd.co.at/adminwiki/Web_DevelopmentWeb Development2006-05-26T22:18:12Z<p>Ch: /* Perl */</p>
<hr />
<div>Not really the scope of the average administrator, but often enough we're faced with writing smaller (or larger) applications for our own use or are forced to fix code botched together by more or less competent developers.<br />
<br />
<br />
= The choice =<br />
<br />
Although PHP might be your first choice when doing "web development" that doesn't mean it's the best one.<br />
<br />
Food for thought:<br />
<br />
* [http://tnx.nl/php PHP in contrast to perl]<br />
* [http://czth.net/pH/PHPSucks PHPSucks]<br />
* [http://www.ukuug.org/events/linux2002/papers/html/php/ Experiences of using PHP in large websites]<br />
<br />
== Unbiased facts: ==<br />
<br />
* PHPs availability is much better on "foreign" servers.<br />
* PHP is much easier to use than Perl for beginners, because you can just throw HTML in PHP files (or vice versa).<br />
* Perl is much cleaner than PHP.<br />
* Perls webserver integration sucks. Neither CGI, nor mod_perl nor fastcgi give you the ease of use of PHP.<br />
<br />
= PHP =<br />
<br />
== Administration == <br />
* Securing<br />
* Tuning (Zend Optimizer & replacements)<br />
<br />
== Programming ==<br />
<br />
* [[Smarty]]<br />
* [[PEAR]]<br />
<br />
= Perl =<br />
* FastCGI vs. mod_perl<br />
* [[Perl Package Management]]<br />
<br />
= Python =<br />
<br />
= Java =<br />
* Tomcat</div>Chhttp://amd.co.at/adminwiki/Web_DevelopmentWeb Development2006-05-26T22:18:04Z<p>Ch: /* Perl */</p>
<hr />
<div>Not really the scope of the average administrator, but often enough we're faced with writing smaller (or larger) applications for our own use or are forced to fix code botched together by more or less competent developers.<br />
<br />
<br />
= The choice =<br />
<br />
Although PHP might be your first choice when doing "web development" that doesn't mean it's the best one.<br />
<br />
Food for thought:<br />
<br />
* [http://tnx.nl/php PHP in contrast to perl]<br />
* [http://czth.net/pH/PHPSucks PHPSucks]<br />
* [http://www.ukuug.org/events/linux2002/papers/html/php/ Experiences of using PHP in large websites]<br />
<br />
== Unbiased facts: ==<br />
<br />
* PHPs availability is much better on "foreign" servers.<br />
* PHP is much easier to use than Perl for beginners, because you can just throw HTML in PHP files (or vice versa).<br />
* Perl is much cleaner than PHP.<br />
* Perls webserver integration sucks. Neither CGI, nor mod_perl nor fastcgi give you the ease of use of PHP.<br />
<br />
= PHP =<br />
<br />
== Administration == <br />
* Securing<br />
* Tuning (Zend Optimizer & replacements)<br />
<br />
== Programming ==<br />
<br />
* [[Smarty]]<br />
* [[PEAR]]<br />
<br />
= Perl =<br />
* FastCGI vs. mod_perl<br />
* [Perl Package Management]<br />
<br />
= Python =<br />
<br />
= Java =<br />
* Tomcat</div>Chhttp://amd.co.at/adminwiki/RedHatRedHat2006-05-25T22:43:55Z<p>Ch: </p>
<hr />
<div>== Useful Links ==<br />
* [http://www.akadia.com/services/redhat_static_routes.html Setting up Static Routes on Redhat, past and present]</div>Chhttp://amd.co.at/adminwiki/Linux_DistributionsLinux Distributions2006-05-25T22:43:30Z<p>Ch: /* Redhat */</p>
<hr />
<div>= [[Debian]] =<br />
<br />
The first choice<br />
<br />
= [[RedHat]] =<br />
RedHat and derivates.<br />
<br />
= SuSE =</div>Chhttp://amd.co.at/adminwiki/PAM/NSSPAM/NSS2006-05-25T03:36:40Z<p>Ch: /* Pluggable Authentication Modules */</p>
<hr />
<div>PAM and NSS are the two services you have to mess around with if you need more than passwd and shadow. Both use backends as their data sources, and these usually have seperate configuration files. <br />
<br />
= Name Switch Service = <br />
<br />
NSS is, basically, the [http://www.gnu.org/software/libc/ glibc] resolver. It can resolve lots of different stuff for you: hosts, networks, protocol names, users, groups, etc. Data for the resolver can be provided by more than one backend, but not all backends make sense for every data type ("database" in NSS-speak).<br />
<br />
<tt>/etc/nsswitch.conf</tt> defines which backends glibc will use to resolve a particular data type. The backends are distributed in form of Shared Objects, carrying the names /lib/libnss_*.so* . Which databases are available is documented in <tt>man nsswitch.conf</tt>. Changes to nsswitch.conf will require a restart of the affected applications.<br />
<br />
== Caching ==<br />
<tt>nscd</tt> provides the Name Service Caching Daemon, which caches the output of the resolver backends. Running nscd gets you a big speed improvement, but burns you when debugging NSS problems. In this case just stop nscd, your applications will revert to non-caching mode automatically. If you see strange behaviour or old data it's a good idea to restart nscd.<br />
<br />
== Troubleshooting Multicast DNS packets ==<br />
Newer glibcs ship with support for MDNS - [http://www.multicastdns.org/ multicast DNS]. Useful if you have a home network, no working DNS, probably a [http://en.wikipedia.org/wiki/Zeroconf .local domain] and a recent Linux distro or OS X systems (Windows?). <br />
If you 'do' have a working DNS for your .local domain (e.g. company lan), this gets quite annoying (slow, unneeded network traffic, etc).<br />
<br />
The solution: remove <tt>mdns</tt> from the <tt>hosts:</tt> line in nsswitch.conf. Reboot, and you should have gotten rid of that mdns stuff. <br />
Sadly, some particular versions of glibc distributed by some vendors (SuSE) have <tt>mdns</tt> support built-in and it can't be disabled. Rumour has it, that there are now updated rpms which fix this.<br />
<br />
= Pluggable Authentication Modules =<br />
While NSS provides the lookup and mapping of your users, [http://www.kernel.org/pub/linux/libs/pam/ PAM] will provide the login handling, authentication and session setup. <br />
<br />
PAM is configured per-application, and applications will have to support it explicitly (most daemons now support pam, [[vsftpd]] NOT by default). Its config files are <tt>/etc/pam.conf</tt> (rather obsolete) and <tt>/etc/pam.d/{service name}</tt>.<br />
<br />
Debian provides a set of files called common-* which get included in every service configuration file. Modify them if you want to setup server-wide PAM. RedHat does not provide such a thing, but most configurations can be set up using the <tt>authconfig</tt> program.<br />
<br />
PAM backends are distributed as Shared Object files, located in /lib/security/pam_*.<br />
<br />
= Upgrade Troubles =<br />
Be careful when upgrading glibc and/or pam/nss backends: it's easy to shoot yourself. <br />
<br />
Usually running applications will have to be restarted after upgrading one of libc or backends, due to the nature of the integration of libc with the backends (libc dyna-links the backends at runtime and sometimes releases them - which will cause problems when it loads them again but are different versions then).</div>Chhttp://amd.co.at/adminwiki/Best_common_practicesBest common practices2006-05-25T03:34:46Z<p>Ch: /* Obey the RTC */</p>
<hr />
<div>This should give you a rundown on the absolute minimum ''every'' server should have.<br />
<br />
= Time issues =<br />
<br />
== Keep in sync ==<br />
<br />
There is absolutely ''no'' excuse for not having a correctly synchronized clock. This will bite you when you you've to compare logfiles from multiple servers and cause problems when you need to deliver ''accurate'' logs (police investigation, etc.).<br />
<br />
The problem got worse in the last few years (at least that's my impression) because processors got faster and/or time-keeping-mechanisms sloppier. What the operating system basically does<ref>I'm not completely sure about that. If I'm horribly wrong here, please tell me so ;)</ref> when booting up is fetching the current time and date from the RTC, then taking a wild guess on how many CPU cycles<ref>(or any other time source, e.g. HPET)</ref> are approximately one second and then using this guesstimate as long as the OS runs, which unfortunately is almost never accuracte. Excessive IRQ usage, CPU cycle modulation (power saving) and other factors might also increase the inaccuracy.<br />
<br />
What a NTP daemon basically does is comparing the system time with an external timesource (usually a NTP server), estimating on how far off the OS is and then disciplining the system time. It also tracks the inaccuracy of the system clock so that it can keep the clock in sync even when the ntp server should be unreachable for longer periods.<br />
<br />
== Obey the RTC ==<br />
<br />
Another major issue are wrong times in the RTC. You have to ensure that your system time is correct ''before'' your operating switches to multi-user mode.<br />
<br />
Common scenario in DST-countries:<br />
<br />
*Your RTC is set to the local timezone.<br />
*Your server has an uptime >= 180 days, meaning that it probably has passed a DST<ref>Daylight saving time</ref> boundary.<br />
*Your server crashes.<br />
<br />
<br />
At this point, if you haven't taken any precautions, you're fucked as soon as the server is online again.<br />
<br />
* Best case: wrong logfile-entries and a few incorrect mtimes on files. <br />
* Worst case: Important business data (accounting, transactions, etc.) have the wrong timestamps. Good luck correcting these by hand.<br />
<br />
<br />
There are a few solutions to this problem:<br />
<br />
* Put ntpdate in your startup scripts ''after'' your network has initialized and ''before'' ntp-server starts. Test it!<br />
:This has the drawback that when the network or your ntp-server of choice is down you'll still run into troubles<br />
* Have hwclock write the system time to the RTC every now and then.<br />
:This is still dangerous, since there's a window where your server will boot with the wrong time in the RTC, but it minimizes the risk noticeably.<br />
* Set the hardware-clock to UTC<br />
:Untested. If anybody successfully uses this in a DST-zone, please contact me.<br />
<br />
= Footnotes =<br />
<references/><br />
<br />
= The rest =<br />
<br />
* backup<br />
* monitoring<br />
* sane logging<br />
* handling of (security) updates<br />
* minimum amount of installed packages</div>Chhttp://amd.co.at/adminwiki/PAM/NSSPAM/NSS2006-05-25T03:20:59Z<p>Ch: /* Troubleshooting Multicast DNS packets */</p>
<hr />
<div>PAM and NSS are the two services you have to mess around with if you need more than passwd and shadow. Both use backends as their data sources, and these usually have seperate configuration files. <br />
<br />
= Name Switch Service = <br />
<br />
NSS is, basically, the [http://www.gnu.org/software/libc/ glibc] resolver. It can resolve lots of different stuff for you: hosts, networks, protocol names, users, groups, etc. Data for the resolver can be provided by more than one backend, but not all backends make sense for every data type ("database" in NSS-speak).<br />
<br />
<tt>/etc/nsswitch.conf</tt> defines which backends glibc will use to resolve a particular data type. The backends are distributed in form of Shared Objects, carrying the names /lib/libnss_*.so* . Which databases are available is documented in <tt>man nsswitch.conf</tt>. Changes to nsswitch.conf will require a restart of the affected applications.<br />
<br />
== Caching ==<br />
<tt>nscd</tt> provides the Name Service Caching Daemon, which caches the output of the resolver backends. Running nscd gets you a big speed improvement, but burns you when debugging NSS problems. In this case just stop nscd, your applications will revert to non-caching mode automatically. If you see strange behaviour or old data it's a good idea to restart nscd.<br />
<br />
== Troubleshooting Multicast DNS packets ==<br />
Newer glibcs ship with support for MDNS - [http://www.multicastdns.org/ multicast DNS]. Useful if you have a home network, no working DNS, probably a [http://en.wikipedia.org/wiki/Zeroconf .local domain] and a recent Linux distro or OS X systems (Windows?). <br />
If you 'do' have a working DNS for your .local domain (e.g. company lan), this gets quite annoying (slow, unneeded network traffic, etc).<br />
<br />
The solution: remove <tt>mdns</tt> from the <tt>hosts:</tt> line in nsswitch.conf. Reboot, and you should have gotten rid of that mdns stuff. <br />
Sadly, some particular versions of glibc distributed by some vendors (SuSE) have <tt>mdns</tt> support built-in and it can't be disabled. Rumour has it, that there are now updated rpms which fix this.<br />
<br />
= Pluggable Authentication Modules =<br />
While NSS provides the lookup and mapping of your users, [http://www.kernel.org/pub/linux/libs/pam/ PAM] will provide the way in itself (e.g. handling login, authentication and session setup). <br />
<br />
PAM is configured per-application, and applications will have to support it explicitly (most daemons now support pam, [[vsftpd]] NOT by default). It's config files are <tt>/etc/pam.conf</tt> (rather obsolete) and <tt>/etc/pam.d/{service name}</tt>.<br />
<br />
Debian provides a set of files called common-* which get included in every service configuration file. Modify them if you want to setup server-wide PAM. RedHat does not provide such a thing, but most configurations can be set up using the <tt>authconfig</tt> program.<br />
<br />
PAM backends are distributed as Shared Object files, located in /lib/security/pam_*.<br />
<br />
= Upgrade Troubles =<br />
Be careful when upgrading glibc and/or pam/nss backends: it's easy to shoot yourself. <br />
<br />
Usually running applications will have to be restarted after upgrading one of libc or backends, due to the nature of the integration of libc with the backends (libc dyna-links the backends at runtime and sometimes releases them - which will cause problems when it loads them again but are different versions then).</div>Chhttp://amd.co.at/adminwiki/PAM/NSSPAM/NSS2006-05-25T03:14:17Z<p>Ch: </p>
<hr />
<div>PAM and NSS are the two services you have to mess around with if you need more than passwd and shadow. Both use backends as their data sources, and these usually have seperate configuration files. <br />
<br />
= Name Switch Service = <br />
<br />
NSS is, basically, the [http://www.gnu.org/software/libc/ glibc] resolver. It can resolve lots of different stuff for you: hosts, networks, protocol names, users, groups, etc. Data for the resolver can be provided by more than one backend, but not all backends make sense for every data type ("database" in NSS-speak).<br />
<br />
<tt>/etc/nsswitch.conf</tt> sets up which backends glibc will use to resolve a particular data type. The backends are distributed in form of Shared Objects, carrying the names /lib/libnss_*.so* . Which databases are available is doc'd in <tt>man nsswitch.conf</tt>. Changes to nsswitch.conf will require a restart of the affected applications.<br />
<br />
== Caching ==<br />
<tt>nscd</tt> provides the Name Service Caching Daemon, which caches the output of the resolver backends. Running nscd gets you a big speed improvement, but burns you when debugging NSS problems. In this case just stop nscd, your applications will revert to non-caching mode automatically. If you see strange behaviour or old data it's a good idea to restart nscd.<br />
<br />
== Troubleshooting Multicast DNS packets ==<br />
Newer glibcs ship with support for MDNS - multicast DNS. Useful if you have a home network, no working DNS, probably a .local domain and a recent Linux distro or OS X systems (Windows?). <br />
If you 'do' have a working DNS for your .local domain (e.g. company lan), this gets quite annoying (slow, unneeded network traffic, etc).<br />
<br />
The solution: remove <tt>mdns</tt> from the <tt>hosts:</tt> line in nsswitch.conf. Reboot, and you should have gotten rid of that mdns stuff. <br />
Sadly, some particular versions of glibc distributed by some vendors (SuSE) have <tt>mdns</tt> support built-in and it can't be disabled. Rumour has it, that there are now updated rpms which fix this.<br />
<br />
<br />
= Pluggable Authentication Modules =<br />
While NSS provides the lookup and mapping of your users, [http://www.kernel.org/pub/linux/libs/pam/ PAM] will provide the way in itself (e.g. handling login, authentication and session setup). <br />
<br />
PAM is configured per-application, and applications will have to support it explicitly (most daemons now support pam, [[vsftpd]] NOT by default). It's config files are <tt>/etc/pam.conf</tt> (rather obsolete) and <tt>/etc/pam.d/{service name}</tt>.<br />
<br />
Debian provides a set of files called common-* which get included in every service configuration file. Modify them if you want to setup server-wide PAM. RedHat does not provide such a thing, but most configurations can be set up using the <tt>authconfig</tt> program.<br />
<br />
PAM backends are distributed as Shared Object files, located in /lib/security/pam_*.<br />
<br />
= Upgrade Troubles =<br />
Be careful when upgrading glibc and/or pam/nss backends: it's easy to shoot yourself. <br />
<br />
Usually running applications will have to be restarted after upgrading one of libc or backends, due to the nature of the integration of libc with the backends (libc dyna-links the backends at runtime and sometimes releases them - which will cause problems when it loads them again but are different versions then).</div>Chhttp://amd.co.at/adminwiki/Software_SolutionsSoftware Solutions2006-05-25T03:12:47Z<p>Ch: /* AAA */</p>
<hr />
<div>= Solutions in Software =<br />
<br />
== Backup ==<br />
<br />
* [[rsync]]<br />
* [[bacula]] <br />
* [[arkeia]] (commercial)<br />
<br />
== monitoring ==<br />
<br />
* [[nagios]]<br />
* [[big brother/sister]]<br />
* [[munin]]<br />
* [[smokeping]] <br />
* [[monit]]<br />
* [[HotSaNIC]]<br />
* [[cricket]]<br />
* [[cacti]]<br />
<br />
== migrating servers ==<br />
<br />
* [[rsync/tar/cp]] -a <br />
<br />
== AAA ==<br />
<br />
* [[PAM/NSS]]<br />
* [[LDAP]]<br />
* [[RADIUS]]</div>Chhttp://amd.co.at/adminwiki/PAM/NSSPAM/NSS2006-05-25T03:10:35Z<p>Ch: Pam/nss moved to PAM/NSS</p>
<hr />
<div>PAM and NSS are the two services you have to mess around with if you need more than passwd and shadow. Both use backends as their data sources, and these usually have seperate configuration files. <br />
<br />
= Name Switch Service = <br />
<br />
NSS is, basically, the glibc resolver. It can resolve lots of different stuff for you: hosts, networks, protocol names, users, groups, etc. Data for the resolver can be provided by more than one backend, but not all backends make sense for every data type ("database" in NSS-speak).<br />
<br />
<tt>/etc/nsswitch.conf</tt> sets up which backends glibc will use to resolve a particular data type. The backends are distributed in form of Shared Objects, carrying the names /lib/libnss_*.so* . Which databases are available is doc'd in <tt>man nsswitch.conf</tt>. Changes to nsswitch.conf will require a restart of the affected applications.<br />
<br />
== Caching ==<br />
<tt>nscd</tt> provides the Name Service Caching Daemon, which caches the output of the resolver backends. Running nscd gets you a big speed improvement, but burns you when debugging NSS problems. In this case just stop nscd, your applications will revert to non-caching mode automatically. If you see strange behaviour or old data it's a good idea to restart nscd.<br />
<br />
== Troubleshooting Multicast DNS packets ==<br />
Newer glibcs ship with support for MDNS - multicast DNS. Useful if you have a home network, no working DNS, probably a .local domain and a recent Linux distro or OS X systems (Windows?). <br />
If you 'do' have a working DNS for your .local domain (e.g. company lan), this gets quite annoying (slow, unneeded network traffic, etc).<br />
<br />
The solution: remove <tt>mdns</tt> from the <tt>hosts:</tt> line in nsswitch.conf. Reboot, and you should have gotten rid of that mdns stuff. <br />
Sadly, some particular versions of glibc distributed by some vendors (SuSE) have <tt>mdns</tt> support built-in and it can't be disabled. Rumour has it, that there are now updated rpms which fix this.<br />
<br />
<br />
= Pluggable Authentication Modules =<br />
While NSS provides the lookup and mapping of your users, PAM will provide the way in itself (e.g. handling login, authentication and session setup). <br />
<br />
PAM is configured per-application, and applications will have to support it explicitly (most daemons now support pam, [[vsftpd]] NOT by default). It's config files are <tt>/etc/pam.conf</tt> (rather obsolete) and <tt>/etc/pam.d/{service name}</tt>.<br />
<br />
Debian provides a set of files called common-* which get included in every service configuration file. Modify them if you want to setup server-wide PAM. RedHat does not provide such a thing, but most configurations can be set up using the <tt>authconfig</tt> program.<br />
<br />
PAM backends are distributed as Shared Object files, located in /lib/security/pam_*.<br />
<br />
= Upgrade Troubles =<br />
Be careful when upgrading glibc and/or pam/nss backends: it's easy to shoot yourself. <br />
<br />
Usually running applications will have to be restarted after upgrading one of libc or backends, due to the nature of the integration of libc with the backends (libc dyna-links the backends at runtime and sometimes releases them - which will cause problems when it loads them again but are different versions then).</div>Chhttp://amd.co.at/adminwiki/PAM/NSSPAM/NSS2006-05-25T03:10:24Z<p>Ch: </p>
<hr />
<div>PAM and NSS are the two services you have to mess around with if you need more than passwd and shadow. Both use backends as their data sources, and these usually have seperate configuration files. <br />
<br />
= Name Switch Service = <br />
<br />
NSS is, basically, the glibc resolver. It can resolve lots of different stuff for you: hosts, networks, protocol names, users, groups, etc. Data for the resolver can be provided by more than one backend, but not all backends make sense for every data type ("database" in NSS-speak).<br />
<br />
<tt>/etc/nsswitch.conf</tt> sets up which backends glibc will use to resolve a particular data type. The backends are distributed in form of Shared Objects, carrying the names /lib/libnss_*.so* . Which databases are available is doc'd in <tt>man nsswitch.conf</tt>. Changes to nsswitch.conf will require a restart of the affected applications.<br />
<br />
== Caching ==<br />
<tt>nscd</tt> provides the Name Service Caching Daemon, which caches the output of the resolver backends. Running nscd gets you a big speed improvement, but burns you when debugging NSS problems. In this case just stop nscd, your applications will revert to non-caching mode automatically. If you see strange behaviour or old data it's a good idea to restart nscd.<br />
<br />
== Troubleshooting Multicast DNS packets ==<br />
Newer glibcs ship with support for MDNS - multicast DNS. Useful if you have a home network, no working DNS, probably a .local domain and a recent Linux distro or OS X systems (Windows?). <br />
If you 'do' have a working DNS for your .local domain (e.g. company lan), this gets quite annoying (slow, unneeded network traffic, etc).<br />
<br />
The solution: remove <tt>mdns</tt> from the <tt>hosts:</tt> line in nsswitch.conf. Reboot, and you should have gotten rid of that mdns stuff. <br />
Sadly, some particular versions of glibc distributed by some vendors (SuSE) have <tt>mdns</tt> support built-in and it can't be disabled. Rumour has it, that there are now updated rpms which fix this.<br />
<br />
<br />
= Pluggable Authentication Modules =<br />
While NSS provides the lookup and mapping of your users, PAM will provide the way in itself (e.g. handling login, authentication and session setup). <br />
<br />
PAM is configured per-application, and applications will have to support it explicitly (most daemons now support pam, [[vsftpd]] NOT by default). It's config files are <tt>/etc/pam.conf</tt> (rather obsolete) and <tt>/etc/pam.d/{service name}</tt>.<br />
<br />
Debian provides a set of files called common-* which get included in every service configuration file. Modify them if you want to setup server-wide PAM. RedHat does not provide such a thing, but most configurations can be set up using the <tt>authconfig</tt> program.<br />
<br />
PAM backends are distributed as Shared Object files, located in /lib/security/pam_*.<br />
<br />
= Upgrade Troubles =<br />
Be careful when upgrading glibc and/or pam/nss backends: it's easy to shoot yourself. <br />
<br />
Usually running applications will have to be restarted after upgrading one of libc or backends, due to the nature of the integration of libc with the backends (libc dyna-links the backends at runtime and sometimes releases them - which will cause problems when it loads them again but are different versions then).</div>Chhttp://amd.co.at/adminwiki/Novell_eDirectoryNovell eDirectory2006-05-25T02:48:12Z<p>Ch: </p>
<hr />
<div>There is no need for tuning eDirectory. Period. You can tune your hardware (e.g. get better disk io, more ram, etc), but thats it - eDirectory just runs, tunes itself, stays consistent.<br />
<br />
eDirectory (formerly known as NDS) is an X.500 compliant Directory Service and implements its own DAP (Directory Access Protocol) on top of NCP. As no one cares about X.500 or DAP today, you will probably want to use LDAP to access it. NLDAP is the LDAP server of eDirectory.<br />
<br />
= Gotchas =<br />
User Password handling is very different from other implementations. <br />
<br />
By default, userPassword gets mapped to a public/private key pair - and is therefore never stored in plain text. This makes it almost impossible to migrate away from eDirectory and keeping the user passwords.<br />
<br />
Also, newer versions of NDS support multiple authentication mechanisms (keyword: NMAS), so you should look into them.<br />
<br />
If you are running anything below NDS version 8.6, upgrade now. Mixing 7.x or earlier, 8.0, 8.5+ is *NOT* a good idea (altough it should work if your Master replica is 8.5).<br />
<br />
= Performance =<br />
As mentioned above, you usually dont tune eDirectory - there is no need to. Cache sizes etc. will depend on your tree size and get set automatically. It's easy to outperform OpenLDAP on the same hardware with a big directory tree.<br />
<br />
= Replication =<br />
Replication relies on a working time synchronization, so if you have problems with replication, check timesync/ntp. eDirectory can support read-only slaves, but there is no point of doing it (every login/connect will write back to the user object, so this only creates load). Better have read/write replicas everywhere - maybe filtered (not recommended if you dont know what you are doing).<br />
<br />
= Troubleshooting =<br />
NDS has a nice Web Tool, called iMonitor. Go use it. You can access logging, debug stuff, etc. using iMonitor or the DSTRACE command line tool (on NetWare: SET DSTRACE=ON, on other opsys: run (n)dstrace). DSTRACE understands lots of different flags. Of interest will usually be +LDAP and some of the default flags. Use +SYNC if you are checking Replication problems and pay attention to the time vectors.<br />
<br />
You can fix lots of problems by just running dsrepair in automatic mode. <br />
<br />
Have a look [http://support.novell.com/techcenter/articles/ana19990102.html here] and [http://support.novell.com/techcenter/articles/anp20010901.html here]. Most of the older NDS docs is still valid.<br />
<br />
= Dangers =<br />
<br />
dsrepair knows of some command line parameters named like -XK3, -XK4 etc. DO NOT USE THEM. They are dangerous and ''will cause data loss''. Still they are sometimes useful, but then you a) need a backup, b) need a backup of all your replicas, c) know what you are doing.<br />
<br />
Don't delete the Admin object (really: don't delete the last Admin object). If you do, you cannot restore it. Novell Technical Support can under some circumstances. If you have a NetWare server in your tree, you are lucky and won't need NTS: there are some NLMs floating around which can re-create an Admin object. Some of them are payware, some are free.<br />
You could ask [[User:Ch]] in this situation and he may dig out some self-written stuff for you.<br />
<br />
= Specialities on NetWare =<br />
NDS was (and still is) the Directory Service for NetWare. This creates a few specialities you should know;<br />
<br />
* Once NDS is running, you cannot unload DS.NLM any more, but have to reboot. Starting without NDS is possible by adding -NDB to the server command line.<br />
<br />
* The DS database files are located in SYS:_NETWARE, which you cannot access using NCP (from a client machine or an NCP client on the server). NLMs which dont use the redirector (= dont log in), can access this directory. Toolbox can, if you specify /nl (= no login).</div>Chhttp://amd.co.at/adminwiki/Web_DevelopmentWeb Development2006-05-25T02:29:24Z<p>Ch: </p>
<hr />
<div>Not really the scope of the average administrator, but often enough we're faced with writing smaller (or larger) applications for our own use or are forced to fix code botched together by more or less competent developers.<br />
<br />
<br />
= The choice =<br />
<br />
Although PHP might be your first choice when doing "web development" that doesn't mean it's the best one.<br />
<br />
Food for thought:<br />
<br />
* [http://tnx.nl/php PHP in contrast to perl]<br />
* [http://czth.net/pH/PHPSucks PHPSucks]<br />
* [http://www.ukuug.org/events/linux2002/papers/html/php/ Experiences of using PHP in large websites]<br />
<br />
== Unbiased facts: ==<br />
<br />
* PHPs availability is much better on "foreign" servers.<br />
* PHP is much easier to use than Perl for beginners, because you can just throw HTML in PHP files (or vice versa).<br />
* Perl is much cleaner than PHP.<br />
* Perls webserver integration sucks. Neither CGI, nor mod_perl nor fastcgi give you the ease of use of PHP.<br />
<br />
= PHP =<br />
<br />
== Administration == <br />
* Securing<br />
* Tuning (Zend Optimizer & replacements)<br />
<br />
== Programming ==<br />
<br />
* [[Smarty]]<br />
* [[PEAR]]<br />
<br />
= Perl =<br />
* FastCGI vs. mod_perl<br />
* Package Management (dh-make-perl, etc)<br />
<br />
<br />
= Python =<br />
<br />
= Java =<br />
* Tomcat</div>Chhttp://amd.co.at/adminwiki/Novell_eDirectoryNovell eDirectory2006-05-25T02:23:15Z<p>Ch: </p>
<hr />
<div>There is no need for tuning eDirectory. Period. You can tune your hardware (e.g. get better disk io, more ram, etc), but thats it - eDirectory just runs, tunes itself, stays consistent.<br />
<br />
eDirectory (formerly known as NDS) is an X.500 compliant Directory Service and implements its own DAP (Directory Access Protocol) on top of NCP. As no one cares about X.500 or DAP today, you will probably want to use LDAP to access it. NLDAP is the LDAP server of eDirectory.<br />
<br />
= Gotchas =<br />
User Password handling is very different from other implementations. <br />
<br />
By default, userPassword gets mapped to a public/private key pair - and is therefore never stored in plain text. This makes it almost impossible to migrate away from eDirectory and keeping the user passwords.<br />
<br />
Also, newer versions of NDS support multiple authentication mechanisms (keyword: NMAS), so you should look into them.<br />
<br />
If you are running anything below NDS version 8.6, upgrade now. Mixing 7.x or earlier, 8.0, 8.5+ is *NOT* a good idea (altough it should work if your Master replica is 8.5).<br />
<br />
= Performance =<br />
As mentioned above, you usually dont tune eDirectory - there is no need to. Cache sizes etc. will depend on your tree size and get set automatically. It's easy to outperform OpenLDAP on the same hardware with a big directory tree.<br />
<br />
= Replication =<br />
Replication relies on a working time synchronization, so if you have problems with replication, check timesync/ntp. eDirectory can support read-only slaves, but there is no point of doing it (every login/connect will write back to the user object, so this only creates load). Better have read/write replicas everywhere - maybe filtered (not recommended if you dont know what you are doing).<br />
<br />
= Troubleshooting =<br />
NDS has a nice Web Tool, called iMonitor. Go use it. You can access logging, debug stuff, etc. using iMonitor or the DSTRACE command line tool (on NetWare: SET DSTRACE=ON, on other opsys: run (n)dstrace). DSTRACE understands lots of different flags. Of interest will usually be +LDAP and some of the default flags. Use +SYNC if you are checking Replication problems and pay attention to the time vectors.<br />
<br />
You can fix lots of problems by just running dsrepair in automatic mode. <br />
<br />
Have a look [http://support.novell.com/techcenter/articles/ana19990102.html here] and [http://support.novell.com/techcenter/articles/anp20010901.html here]. Most of the older NDS docs is still valid.<br />
<br />
''Danger'': dsrepair knows of some command line parameters named like -XK3, -XK4 etc. DO NOT USE THEM. They are dangerous and ''will cause data loss''. Still they are sometimes useful, but then you a) need a backup, b) need a backup of all your replicas, c) know what you are doing.</div>Chhttp://amd.co.at/adminwiki/DatabasesDatabases2006-05-25T02:07:01Z<p>Ch: /* LDAP */</p>
<hr />
<div>= SQL =<br />
<br />
== [[MySQL]] ==<br />
<br />
Database that has a SQL interface. Claims to be ACID<ref name="acid">Atomicity, Consistency, Isolation, Durability. See [http://databases.about.com/od/specificproducts/a/acid.htm The ACID model]</ref>-compliant; tries hard to. Same story as with [[Web Development|PHP]]: Large availability, everybody uses it, hardly the best choice.<br />
<br />
Food for thought:<br />
<br />
* [http://sql-info.de/mysql/gotchas.html MySQL Gotchas]<br />
* [http://www.databasejournal.com/features/mysql/article.php/3519116 MySQL oddities]<br />
* [http://drbrain.livejournal.com/61705.html MySQL Sucks]<br />
* [http://www.andrewsavory.com/blog/archives/000266.html MySQL sucks]<br />
* [http://habtm.com/articles/2005/05/01/tagging-again-mysql-subselects-suck MySQL subselects suck]<br />
<br />
== [[PostgresSQL]] ==<br />
<br />
Full-featured SQL Server. Is ACID<ref name="acid">a</ref>-compliant. Needs more maintenance than MySQL.<br />
<br />
Food for thought:<br />
<br />
* [http://sql-info.de/postgresql/postgres-gotchas.html PostgresSQL gotchas]<br />
<br />
== Oracle ==<br />
<br />
The commercial behemoth.<br />
<br />
= LDAP =<br />
LDAP systems are like Databases too - often they build on a generic database and provide a more specialised view.<br />
<br />
* [[OpenLDAP/slapd]]: the open source LDAP daemon<br />
* [[Novell eDirectory|Novell eDirectory/NLDAP]]: a commercial system<br />
<br />
= Footnotes =<br />
<br />
<references/></div>Chhttp://amd.co.at/adminwiki/File:Icon-about.pngFile:Icon-about.png2006-05-25T01:32:44Z<p>Ch: </p>
<hr />
<div></div>Chhttp://amd.co.at/adminwiki/Main_PageMain Page2006-05-25T01:27:40Z<p>Ch: </p>
<hr />
<div>== The admin wiki. ==<br />
<br />
<table width=100%><br />
<tr><td valign=top><br />
<br />
<table width=50%><br />
<tr><br />
<td>[[Image:Icon-operating_systems.png]]</td><br />
<td>[[Operating Systems]]<br>Linux, *BSD, Solaris, ...</td><br />
</tr><tr><br />
<td>[[Image:Icon-networking.png]]</td><br />
<td>[[Networking]]<br>Remote Boot, Firewall, VPN, ...</td><br />
</tr><tr><br />
<td>[[Image:Icon-tools.png]]</td><br />
<td>[[Tools]]<br>CLI Tools, Editors, ...</td><br />
</tr><tr><br />
<td>[[Image:Icon-software_solutions.png]]</td><br />
<td>[[Software Solutions]]<br>Monitoring, Backup, AAA, ...</td><br />
</tr><tr><br />
<td>[[Image:Icon-hardware.png]]</td><br />
<td>[[Hardware]]<br>Servers, Storage, Management, ...</td><br />
</tr><tr><br />
<td><br></td><br />
</tr><tr><br />
<td>[[Image:Icon-about.png]]</td><td>[[AdminWiki:About|About]]<br>Motivation, the guys behind the scenes</td><br />
</tr><br />
</table><br />
<br />
</td><br />
<td valign=top><br />
<br />
<table width=50%><br />
<tr><br />
<td>[[Image:Icon-daemons.png]]</td><br />
<td>[[Daemons|Daemons and Services]]<br>HTTP, SMTP, IMAP, ...</td><br />
</tr><tr><br />
<td>[[Image:Icon-databases.png]]</td><br />
<td>[[Databases]]<br>MySQL, PostgresSQL, LDAP, ...</td><br />
</tr><tr><br />
<td>[[Image:Icon-webdevelopment.png]]</td><br />
<td>[[Web Development]]<br>Perl, PHP, FastCGI, ...</td><br />
</tr><tr><br />
<td>[[Image:Icon-clustering.png]]</td><br />
<td>[[Clustering|Clustering and HA]]<br>Load Balancing, Heartbeat, Redundancy, ...</td><br />
</tr><tr><br />
<td>[[Image:Icon-best_common_practices.png]]</td><br />
<td>[[Best common practices]]<br>If that would be my server, I would've done this differently...</td><br />
</tr><tr><br />
<td><br></td><br />
</tr><tr><br />
<td>[[Image:Icon-todo.png]]</td><td>[[Todo|Lots of work to be done]]<br>the long list</td><br />
</tr><br />
</table><br />
<br />
</td><br />
</tr><br />
</table></div>Chhttp://amd.co.at/adminwiki/OpenLDAP/slapdOpenLDAP/slapd2006-05-24T15:46:53Z<p>Ch: Databases/OpenLDAP moved to OpenLDAP/slapd</p>
<hr />
<div>[http://www.openldap.org/ OpenLDAP] is an open source implementation of LDAP utilities, client libraries and a server daemon, which is called <tt>slapd</tt>.<br />
<br />
<br />
slapd supports various backends, which include bdb and gdbm, as well as a Perl and SQL backend. bdb and gdbm will store data in slapds own way, while Perl can be used to acquire data from complex external data sources. slapd can host multiple backends in one process - you can do that, but it complicates everything, and this is not what you want when messing with LDAP.<br />
<br />
To manipulate the slapd database, there are two set of tools:<br />
* <tt>ldap*</tt>, which use the standard LDAP protocol<br />
* <tt>slap*</tt>, which manipulate the database directly, and therefore cannot be used against a running slapd (except for slapcat, but this is unsafe, too)<br />
<br />
LDIF files produced by either set of tools usually cannot be imported using the other without modifications. From slapcat to ldapadd these modifications are merely removing the operational attributes (e.g. <tt>egrep -v "(modifiersName:|modifyTimestamp:|entryCSN:|entryUUID:|creatorsName:|createTimestamp:|structuralObjectClass:)"</tt>).<br />
<br />
<br />
When using the bdb backend, the usual bdb notes apply: be careful with your database. For slapd this means:<br />
* dont expect > 1000 entries to be extremly fast<br />
* make regular (plain-text) backups of your database (better run them more than once a day if you have lots of entries)<br />
<br />
For a large setup, you could even go as far as dumping your database every night, and then reimporting it.<br />
<br />
= Upgrading from 2.1 =<br />
Prepare to re-roll your database. Usually you will need to switch to the new schema files supplied with OpenLDAP 2.3, and usually your dataset will not be compatible with them. Takes >4h for a reasonable set of data.<br />
<br />
= Upgrading from 2.2 =<br />
Most often only your schema files need updates/fixes, so this could be easy - or tricky.<br />
<br />
= Upgrading from/to other LDAP Servers =<br />
The only real problem will be the password attribute: some Directory implementations dont store it in string form (e.g. public/private key only), others dont store it cleartext (e.g. OpenLDAP by default).<br />
<br />
Beneath that, this should be a matter of an ldapsearch against the old server and an ldapadd against the new one.</div>Chhttp://amd.co.at/adminwiki/DatabasesDatabases2006-05-24T15:46:46Z<p>Ch: /* Specialised Databases */</p>
<hr />
<div>= Generic Databases = <br />
* MySQL<br />
* Postgres<br />
<br />
= Specialised Databases =<br />
LDAP systems are like Databases too - often they build on a generic database and provide a more specialised view.<br />
<br />
* [[OpenLDAP/slapd]]: the open source LDAP daemon</div>Chhttp://amd.co.at/adminwiki/DaemonsDaemons2006-05-24T15:23:52Z<p>Ch: </p>
<hr />
<div>= http =<br />
* [[lighttpd]]<br />
* [[Apache httpd]]<br />
<br />
= ftp =<br />
* [[ProFTPD]]<br />
* [[vsftpd]]<br />
<br />
= smtp =<br />
* [[Exim]]<br />
* [[Postfix]]<br />
<br />
= imap / pop3 / mail access =<br />
* [[Dovecot]]<br />
* [[Courier]]<br />
<br />
= File Sharing =<br />
* [[CIFS|CIFS and Samba]]<br />
* [[NFS]]<br />
<br />
= ident =<br />
<br />
= ntp =<br />
The [http://www.pool.ntp.org/ NTP Server pool] exists which you could use, but shouldn't on production-class systems. Better query one of these instead:<br />
* ptbtime1.ptb.de<br />
* time.nist.gov<br />
* ts1.univie.ac.at<br />
<br />
__NOTOC__</div>Chhttp://amd.co.at/adminwiki/VsftpdVsftpd2006-05-24T15:23:07Z<p>Ch: Daemons/vsftpd moved to Vsftpd</p>
<hr />
<div>[http://vsftpd.beasts.org/ Official Homepage]</div>Chhttp://amd.co.at/adminwiki/DaemonsDaemons2006-05-24T15:21:25Z<p>Ch: /* imap / pop3 / mail access */</p>
<hr />
<div>= http =<br />
* [[Daemons/lighttpd|lighttpd]]<br />
* [[Daemons/Apache|Apache]]<br />
<br />
= ftp =<br />
* [[Daemons/ProFTPD|ProFTPD]]<br />
* [[Daemons/vsftpd|vsftpd]]<br />
<br />
= smtp =<br />
* [[Daemons/Exim|Exim]]<br />
* [[Daemons/Postfix|Postfix]]<br />
<br />
= imap / pop3 / mail access =<br />
* [[Dovecot]]<br />
* [[Courier]]<br />
<br />
= File Sharing =<br />
* [[CIFS|CIFS and Samba]]<br />
* [[NFS]]<br />
<br />
= ident =<br />
<br />
= ntp =<br />
The [http://www.pool.ntp.org/ NTP Server pool] exists which you could use, but shouldn't on production-class systems. Better query one of these instead:<br />
* ptbtime1.ptb.de<br />
* time.nist.gov<br />
* ts1.univie.ac.at<br />
<br />
__NOTOC__</div>Ch