Monday, October 26, 2015

10-26-2015 Kivy and API (status: in progress)


I need an API for taking tests and storing tests. Mongo, Redis, and other technologies can wait. When I get to the point of saving test results, I will need Mongo. For test purposes I will probably just run a non-sharded Mongo.

Kivy is already set up. This will get the basic API going, which involves MySQL, a Python API for Kivy to use, and the programs Kivy makes. We redo the Kivy installation steps in the virtual environment for another Linux user account.
  1. virtualenv --no-site-packages kivyinstall
  2. . kivyinstall/bin/activate
  3. mkdir Kivy
  4. cd Kivy
  5. pip install Cython==0.21.2
  6. pip install kivy
  7. pip install git+https://github.com/kivy/buildozer.git@master
  8. pip install git+https://github.com/kivy/plyer.git@master
  9. pip install -U pygments docutils
  10. ln -s ~/kivyinstall/share/kivy-examples kivy-examples
  11. rm /usr/bin/kivy # as root
  12. ln -s /home/test/kivyinstall/bin/python /usr/bin/kivy # as root
  13. # buildozer already installed
  14. buildozer init
  15. buildozer android debug deploy run
  16. git clone git://github.com/kivy/python-for-android
  17. cd python-for-android/
  18. ./distribute.sh -m "pil openssl kivy"
    1. Ran into an error: something with the wrong version of buildozer.
    2. Had to rebuild buildozer as root, remove the buildozer spec file in the normal account, and reinit buildozer.
    3. Will try this again... takes a while to compile everything.
    4. This no longer works; redoing python-for-android from the README file.
  19. pip install git+https://github.com/kivy/python-for-android.git
  20. python-for-android recipes # test
    1. python-for-android or p4a are the executables
  21. pip install jnius
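For reference, `buildozer init` (step 14 above) drops a buildozer.spec in the current directory. The fields you usually end up editing look roughly like this; every value here is an illustrative placeholder, not from my actual project:

```ini
[app]
# illustrative values only -- adjust to the real project
title = TestTaker
package.name = testtaker
package.domain = org.example
source.dir = .
version = 0.1
# kivy must be listed; add others (e.g. openssl) as needed
requirements = kivy
```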

Now making the program....

Learning is first:
  1. http://kivy.org/docs/tutorials/firstwidget.html
  2. http://kivy.org/docs/tutorials-index.html
  3. https://www.quora.com/What-is-a-good-resource-to-learn-Kivy
  4. http://kivy.org/docs/gettingstarted/intro.html
  5. http://kivy.org/docs/gettingstarted/examples.html
Stuff to use:
  1. http://kivy.org/docs/api-kivy.uix.textinput.html
    1. Inputting an answer. 
  2. http://kivy.org/docs/api-kivy.uix.togglebutton.html
    1. For selecting which answer is correct. 
  3. http://kivy.org/docs/api-kivy.uix.scrollview.html
  4. http://kivy.org/docs/api-kivy.uix.tabbedpanel.html 
    1. one window for account
    2. one for selecting test
    3. one for taking test
    4. one for making test
  5. http://kivy.org/docs/api-kivy.clock.html -- timed tests
  6. http://kivy.org/docs/api-kivy.core.camera.html -- customize account
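The ToggleButton docs above cover the UI side; putting toggle buttons in the same group means only one answer can be down at a time, so the app only needs to report a single selected index. A rough sketch of the grading logic that would sit behind that (all names here are mine, nothing is from Kivy itself):

```python
# Minimal question/grading model for the test app (illustrative names).
class Question:
    def __init__(self, text, choices, correct_index):
        self.text = text
        self.choices = choices
        self.correct_index = correct_index

    def grade(self, selected_index):
        """Return True if the single selected choice is correct."""
        return selected_index == self.correct_index


def score(questions, selections):
    """Count correct answers; selections[i] may be None (unanswered)."""
    return sum(
        1 for q, s in zip(questions, selections)
        if s is not None and q.grade(s)
    )
```

The ToggleButton group guarantees at most one selection, so the API never has to deal with multiple picked answers for a single-choice question.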


I have to do the stuff below. First thing, I have to download docs:

  • Download docs and install on local laptop: DONE
    • python
    • apache
    • kivy documentation
      • https://github.com/kivy/kivy/blob/master/doc/README.md
        • apt-get install python-sphinx
        • pip install sphinxcontrib-blockdiag sphinxcontrib-seqdiag
        • pip install sphinxcontrib-actdiag sphinxcontrib-nwdiag
        • download: https://github.com/kivy/kivy/archive/master.zip
          • unzip kivy-master.zip 
          • apt-get install Cython
          • cd kivy-master
          • make html --- this failed, damn it; skipping. Have the PDF version.
From here on out, start a Kivy session with ". kivyinstall/bin/activate".

Understand:
  1. Widget: http://kivy.org/docs/api-kivy.uix.widget.html
    1. Everything is a widget. 
  2. Layouts: http://kivy.org/docs/gettingstarted/layouts.html
  3. Objects:
    1. http://kivy.org/docs/api-kivy.html
      1. Focus on the kivy.uix.* modules. Those are the objects, like buttons. 
        1. Bubble: http://kivy.org/docs/api-kivy.uix.bubble.html
        2. Button: http://kivy.org/docs/api-kivy.uix.button.html
        3. Action Bar: http://kivy.org/docs/api-kivy.uix.actionbar.html
          1. To me this is a slicker way of making the top bar. 
        4. anchor lauout: http://kivy.org/docs/api-kivy.uix.anchorlayout.html
          1. Useful for positioning basic stuff. 
        5. box layout:http://kivy.org/docs/api-kivy.uix.boxlayout.html
          1. Useful for adding buttons. 
        6. Checkbox: http://kivy.org/docs/api-kivy.uix.checkbox.html
          1. Useful for choosing one answer. 
        7. Drop down list: http://kivy.org/docs/api-kivy.uix.dropdown.html
          1. Probably the default for the top bar. 
        8. Image: http://kivy.org/docs/api-kivy.uix.image.html
          1. not immediately useful, but for later. 
        9. popup: http://kivy.org/docs/api-kivy.uix.popup.html
          1. Messages to popup. Can be buttons. 
        10. screens: http://kivy.org/docs/api-kivy.uix.screenmanager.html
          1. Have 4 screens initially, for different purposes, and they don't die; you switch between them.
        11. When looking at examples, do this:
          1. Download examples
          2. cd kivy-examples/demo/showcase
          3. kivy main.py
          4. Look at the root source code. This example is the one example to get you started. There are lots of other good examples, but this one is great. 
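The four-screen plan above maps directly onto ScreenManager. A sketch, assuming four named screens (the names are my placeholders); the kivy import is deferred into the function so the plain-data part works without a display:

```python
# The four planned screens (names are illustrative placeholders).
SCREEN_NAMES = ["account", "select_test", "take_test", "make_test"]


def build_manager():
    # Requires kivy (installed in the kivyinstall virtualenv).
    from kivy.uix.screenmanager import ScreenManager, Screen
    sm = ScreenManager()
    for name in SCREEN_NAMES:
        sm.add_widget(Screen(name=name))
    # Screens persist; switch with sm.current = "take_test".
    return sm
```

Since the screens never die, any state a user builds up (a half-taken test, say) survives switching to the account screen and back.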

  1. Make API
    1. http://kivy.org/docs/api-index.html
    2. Login API
  2. Connection to MySQL database
    1. add session
    2. add cookie
  3. Connect to mongo to save results
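For the session/cookie part of the login API, a sketch of what I'd expect the token handling to look like; this is pure illustration (the real API and MySQL schema don't exist yet), and the secret is obviously a placeholder:

```python
import hashlib
import hmac
import os

# Placeholder -- the real secret would live in server config, not code.
SECRET = b"replace-with-a-real-server-secret"


def new_session_token():
    """Random, unguessable session id to store in the cookie and in MySQL."""
    return hashlib.sha256(os.urandom(32)).hexdigest()


def sign(token):
    """HMAC the token so a tampered cookie can be rejected cheaply."""
    return hmac.new(SECRET, token.encode(), hashlib.sha256).hexdigest()


def cookie_value(token):
    return token + "." + sign(token)


def verify(value):
    token, _, sig = value.rpartition(".")
    return bool(token) and hmac.compare_digest(sig, sign(token))
```

The HMAC lets the API reject forged cookies without a database round-trip; only verified tokens get looked up in the MySQL session table.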

DOING

Making the android version...

DOING

Running on android server..

DOING

Tuesday, October 13, 2015

10-13-2015 Misc things for kivy (status: done)

There are a few things I need to do before I finish the kivy project. I was sidelined doing a bunch of jiujitsu documentation, and now I will make a second application for that.

SUMMARY: Most of this was a waste of time, but still holds promise. Mongo and Redis are not needed right away. The API will be in the next article.

  1. MariaDB + maxscale: will store tests.
    1. Turns out maxscale can only shard by database. Thus, you have to put the sharding knowledge in the software. We will install it for testing purposes, but will get rid of it fast. HAProxy plus your own scripts for it already seems to work fine. I don't know whether, on failover, it will redo the slaving like HAProxy does.
    2. CONCLUSION: Not doing maxscale. Hoping Shard Query is better.
  2. MariaDB + ShardQuery
    1. BTW, not doing mariadb cluster for now, just master/slave for simplicity and testing. 
    2. https://mariadb.com/kb/en/mariadb/shard-query/
    3. http://shardquery.com/
  3. MongoDB sharded: will store answers.
  4. Redis
  5. An API will be used to upload tests, do answers, and anything else. Use REDIS. 
  6. First on laptops, then AWS. 
As extras:
  1. memcache --- nosql to mysql. 
  2. 3rd caching alternative. 
  3. Instead of Hadoop: Voltd, Vertica, and Spark (I like spark), also R
    1. Spark
      1. https://www.xplenty.com/blog/2014/11/apache-spark-vs-hadoop-mapreduce/
      2. http://spark.apache.org/
      3. https://en.wikipedia.org/wiki/Apache_Spark
      4. Python and R
      5. Unfortunately it is still JVM-dependent, though written in Scala.
      6. Is it brain dead to manage? Hadoop is a nightmare. 
    2. R stats language
      1. To replace SAS. It works well with Spark and VerticaDB. 
  4. Mongo from Percona with the two other engines --- necessary? Are the engines useful?
    1. WiredTiger (will be the default from Mongo)
    2. RocksDB and PerconaFT will have to be tested to see if they are any good or fluff. 

MariaDB and MaxScale on laptops


      • You may have to add the repository manually. 
        mark5 apt # more sources.list.d/additional-repositories.list 
        deb http://ftp.kaist.ac.kr/mariadb/repo/10.0/ubuntu trusty main
        mark5 apt # pwd
        /etc/apt
        
    • MaxScale
      • We assume server "mark2" is the master, and mark4 and mark5 are slaves. 
        • mark2:
          • mysql client:
            GRANT REPLICATION SLAVE ON *.* TO 'rep'@'%' IDENTIFIED BY 'dumb_password';
            flush privileges;
            
          • /etc/mysql/my.cnf
            server-id = 2
            log_bin = /var/log/mysql/mysql-bin.log
            #bind-address           = 127.0.0.1
            
            
        • mark4:
          • mysql client (since we are starting fresh we can use the first binlog):
            
            CHANGE MASTER TO MASTER_HOST='192.168.1.209',
              MASTER_USER='rep', 
              MASTER_PASSWORD='dumb_password', 
              MASTER_LOG_FILE='mysql-bin.000003',
              MASTER_LOG_POS=  365;
            start slave;
            show slave status\G
            
            
            
          • /etc/mysql/my.cnf
            server-id = 4
            relay-log  = /var/log/mysql/mysql-relay-bin.log
            #bind-address           = 127.0.0.1
            
        • mark5:
          • mysql client (since we are starting fresh we can use the first binlog):
            
            CHANGE MASTER TO MASTER_HOST='192.168.1.209',
              MASTER_USER='rep', 
              MASTER_PASSWORD='dumb_password', 
              MASTER_LOG_FILE='mysql-bin.000003',
              MASTER_LOG_POS=  365;
            start slave;
            show slave status\G
            
            
          • /etc/mysql/my.cnf
            server-id = 5
            relay-log   = /var/log/mysql/mysql-relay-bin.log
            #bind-address           = 127.0.0.1
            
      • You have to log in or send your email to get it. Annoying.
      • wget https://downloads.mariadb.com/enterprise/wd3x-hx24/mariadb-maxscale/1.2.1/ubuntu/dists/trusty/main/binary-amd64/maxscale-1.2.1-1.ubuntu_trusty.x86_64.deb
        • I had Linux Mint 17.1, which is based on Ubuntu 14.04.
      • Read the docs: 
      • dpkg -i maxscale-1.2.1-1.ubuntu_trusty.x86_64.deb 
      • maxkeys /var/lib/maxscale/
      • maxpasswd /var/lib/maxscale/ `openssl rand -base64 32`
        • Remember the password this creates. You will need it for your config files. 
      • mkdir -p /data/maxscale/data
        mkdir -p /data/maxscale/cache
        
        # In my client on master
        create user 'maxscale'@'192.168.1.%' identified by 'dumbpassword';
        grant SELECT on mysql.user to 'maxscale'@'192.168.1.%';
        grant SELECT on mysql.db to 'maxscale'@'192.168.1.%';
        grant SHOW DATABASES on *.* to 'maxscale'@'192.168.1.%';
        flush privileges;
        
        create database if not exists maxscale_test;
        grant all privileges on maxscale_test.* to rw@'192.168.1.%' identified by 'rw';
        grant select on maxscale_test.* to ro@'192.168.1.%' identified by 'ro';
        grant all privileges on maxscale_test.* to rw@localhost identified by 'rw';
        grant select on maxscale_test.* to ro@localhost identified by 'ro';
        
        
        
      • Use the maxscale config file below, but change the hostnames and passwords. I have two slaves.
      • Results:
        • I was able to insert data into the RR session using a MySQL account that had write permission. I thought this was wrong.
        • When connecting to RW, I thought it was supposed to split reads across every server and send writes to the master only. I was able to insert data on a slave.
        • When the master was brought down, no new sessions could be made. Good, as expected.
        • When I removed router_options and shut down the master, only RW sessions couldn't be made; I could still insert data through the RR session.
        • In order to get failover to work you have to write external scripts.
        • Conclusion: Not better than HAProxy, and either I was doing something wrong or the RW and RR sessions weren't bulletproof. Will have to do this again. Moving on to Shard Query.
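With the RW listener on 3307 and the RR listener on 3308 (defined in the config below), the client-side choice reduces to picking a port by statement type. A sketch of that routing; the classifier here is deliberately naive and mine, not anything MaxScale does:

```python
RW_PORT = 3307  # readwritesplit listener
RR_PORT = 3308  # readconnroute (read-only) listener

# First-word verbs we treat as writes (illustrative, not exhaustive).
WRITE_VERBS = ("insert", "update", "delete", "replace", "create",
               "drop", "alter", "grant", "truncate")


def listener_port(sql):
    """Crude routing: writes go to the RW listener, everything else to RR."""
    verb = sql.lstrip().split(None, 1)[0].lower() if sql.strip() else ""
    return RW_PORT if verb in WRITE_VERBS else RR_PORT
```

In principle the readwritesplit router makes this unnecessary (you'd send everything to 3307), but given the odd results above, an explicit split in the app is a cheap safety net.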

Contents of /etc/maxscale.cnf
A lot of this was stolen from: http://www.severalnines.com/blog/deploy-and-configure-maxscale-sql-load-balancing
[maxscale]
threads=4
logdir=/tmp/
datadir=/data/maxscale/data
cachedir=/data/maxscale/cache
piddir=/data/maxscale



# This is the galera monitor which monitors the mysql services (servers) and may do a failover. 
[Galera Monitor]
type=monitor
module=galeramon
servers=mark2,mark4,mark5
user=maxscale
passwd=2235D8D837C1E22E8043D73ABED6B0B7
monitor_interval=10000
disable_master_failback=1

[qla]
type=filter
module=qlafilter
options=/tmp/QueryLog

[fetch]
type=filter
module=regexfilter
match=fetch
replace=select

# Specify which servers get the write queries and which ones are the slaves. This will split the queries so that read only queries will go to the slaves. I believe the first listed server has to be the master but I am guessing it auto detects this.  
[RW]
type=service
router=readwritesplit
servers=mark2,mark4,mark5
user=rw
passwd=rw
max_slave_connections=100%
router_options=slave_selection_criteria=LEAST_CURRENT_OPERATIONS

# Specify which servers can serve selects. We include the master. "synced" isn't explained very well; I think if the slaves are in sync then it uses them.
[RR]
type=service
router=readconnroute
router_options=synced
servers=mark2,mark4,mark5
user=ro
passwd=ro

[Debug Interface]
type=service
router=debugcli

[CLI]
type=service
router=cli


# The write listener you should put your applications at. 
[RWlistener]
type=listener
service=RW
protocol=MySQLClient
address=192.168.1.209
port=3307

# The read only listener you should put your applications at. 
[RRlistener]
type=listener
service=RR
protocol=MySQLClient
address=192.168.1.209
port=3308

[Debug Listener]
type=listener
service=Debug Interface
protocol=telnetd
address=127.0.0.1
port=4442

[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
address=127.0.0.1
port=6603


# We define the servers here. The first one is the master. 
[mark2]
type=server
address=mark2
port=3306
protocol=MySQLBackend

[mark4]
type=server
address=mark4
port=3306
protocol=MySQLBackend

[mark5]
type=server
address=mark5
port=3306
protocol=MySQLBackend

Shard Query

This will have to work to compete with Mongo in making an easily shardable service. For my immediate two projects, I can use Mongo, so I hope this works or this will fail. It may be possible to use MariaDB Cluster, but there is still a limitation in scaling for it, though it's better.

Summary: Couldn't get it to work smoothly. Still seems beta after all these years. Aborting for now because it is unlikely we will need it right away; don't mind sharding later.

Non-detailed instructions.
https://github.com/greenlion/swanhart-tools/wiki/Installing-Shard-Query

  1. http://www.hostingadvice.com/how-to/install-gearman-ubuntu-14-04/
    1. sudo apt-get install python-software-properties # already installed
    2. sudo add-apt-repository ppa:gearman-developers/ppa
    3. sudo apt-get update
    4. sudo apt-get install gearman-job-server libgearman-dev
    5. sudo apt-get upgrade
    6. sudo apt-get install php-pear php5-dev gearman
    7. Edit /etc/php5/cli/php.ini
      1. extension=gearman.so # under “Dynamic Extensions”
    8. edit /etc/php5/apache2/php.ini
      1. extension=gearman.so # under dynamic extensions
      2. sudo service apache2 restart
    9. Try tests: http://gearman.org/getting-started/#client
      1. The first test just stalled. 
      2. These should give the same result:
        1. gearman -f wc < /etc/passwd
        2. wc -l < /etc/passwd
      3. echo '<?php
        print gearman_version() . "\n";
        ?>
        ' > /var/www/html/test1.php
        
      4. php /var/www/html/test1.php
      5. Add to file and reexecute.
        <?php
        $worker= new GearmanWorker();
        $worker->addServer();
        $worker->addFunction("reverse", "my_reverse_function");
        while ($worker->work());
        
        function my_reverse_function($job)
        {
          return strrev($job->workload());
        }
        ?>
      6. Do the other examples on this page. 
    10. Considering it is very unlikely a good server would need to be sharded for what I want, we will skip this.

Mongo 3.0

My main concern here is when a shard cluster goes offline or a single secondary goes offline: does it force a reconnect for all the mongo connections? The answer should be no. In any event, never trust the software; there should be one cluster for login, one cluster for people to work, one cluster for results, one for reports, etc. If any cluster goes down, you can still work on the others. Clusters should be tied to a product/feature, and features/products should be as independent of each other as possible. Separate super-critical, critical, and non-critical data into their own clusters too, for Mongo or MySQL.
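The cluster-per-feature idea above can be captured as a plain routing table. A sketch with made-up cluster names and URIs (nothing here reflects a real deployment):

```python
# Hypothetical feature -> cluster mapping; hosts and URIs are placeholders.
CLUSTERS = {
    "login":   "mongodb://login-rs0.example.com/?replicaSet=rs0",
    "work":    "mongodb://work-rs1.example.com/?replicaSet=rs1",
    "results": "mongodb://results-rs2.example.com/?replicaSet=rs2",
    "reports": "mongodb://reports-rs3.example.com/?replicaSet=rs3",
}


def cluster_uri(feature):
    """Each feature gets its own cluster, so one outage can't take down all."""
    try:
        return CLUSTERS[feature]
    except KeyError:
        raise ValueError("no cluster mapped for feature: %s" % feature)
```

The point of the explicit lookup is that an unmapped feature fails loudly at routing time instead of silently piling onto a shared cluster.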

ABORTING FOR NOW. We will store test results here. API needs to get done.


Redis

ABORTING FOR NOW. Until the product is up and running there is no point.....

API

This requires a backend to be made for the API. But I know what I want. This will be in the next article.

Sunday, September 6, 2015

9-06-2015 Get Kivy to make an android app on my phone using Dalvik (status: done)

Unfortunately my phone won't upgrade to 5.0, so I can't use ART. I think once I fix my phone I will switch to ART. The reason is that I am probably a year away from really using it in prod, and a year from now most everything will have ART. I don't care about Apple products for cost reasons and target audience.

The nice thing is I might be able to write a program that runs on a phone, tablet, Linux, Windoze, and iOS. It is not going to be an intensive program.


sudo add-apt-repository ppa:kivy-team/kivy
  # I want python2, I hate python3
sudo apt-get install python-kivy

sudo apt-get install kivy-examples # didn't work

  # Had to remove ffmpeg on Linux Mint; hope it doesn't make a difference compared to Ubuntu.
sudo apt-get install -y     python-pip     build-essential     mercurial     git     python     python-dev          libsdl2-dev     libsdl2-gfx-dev     libsdl2-image-dev     libsdl2-mixer-dev     libsdl2-net-dev     libsdl2-ttf-dev     libportmidi-dev     libswscale-dev     libavformat-dev     libavcodec-dev     zlib1g-dev

sudo pip install --upgrade pip virtualenv setuptools


virtualenv --no-site-packages kivyinstall

  ## I guess this puts us in the virtual environment. 
. kivyinstall/bin/activate

pip install Cython==0.21.2
pip install kivy


pip install git+https://github.com/kivy/buildozer.git@master
pip install git+https://github.com/kivy/plyer.git@master
pip install -U pygments docutils


  # We need to know where the examples are. 
python -c "import pkg_resources; print(pkg_resources.resource_filename('kivy', '../share/kivy-examples'))"

# Even though the output was: /root/kivyinstall/local/lib/python2.7/site-packages/kivy/../share/kivy-examples
# my path was : /root/kivyinstall/share/kivy-examples/
ln -s /root/kivyinstall/share/kivy-examples kivy-examples
cd kivy-examples
cd demo/touchtracer
python main.py

cd ../pictures
python main.py

   # Dumb I am root, but this is a test box. 
   # This makes it so the beginning of the scripts can be #!/usr/bin/kivy
ln -s /root/kivyinstall/bin/python /usr/bin/kivy



Packaging for android
git clone https://github.com/kivy/buildozer.git
cd buildozer
sudo python2.7 setup.py install


buildozer init
buildozer android debug deploy run

git clone git://github.com/kivy/python-for-android
cd python-for-android/
./distribute.sh -m "pil openssl kivy"



Had to do some work on apache and git on my local box when pushing to production.
Locally:

  • cd /var/www/html/
  • rm index.html
  • mkdir DIR
  • chown work DIR
  • In the apache config: /etc/apache2/sites-enabled/000-default.conf
    • <Directory "/var/www/html/hhcf/">
      AddHandler cgi-script .cgi .py
      AllowOverride All
      Options +Indexes +FollowSymLinks +ExecCGI
      Order allow,deny
      Allow from all
      </Directory>
  • a2enmod cgi status info
  • Restart apache
For git I created a bash script to save stuff quickly. I don't care about commit messages because it's all me. I just called it "G" with chmod 755 /usr/local/bin/G.


#!/bin/sh

# Quick-save: stage the given path, commit, and push.
echo "$1"
git add "$1"
git commit -m "auto"
git push origin master





Wednesday, September 2, 2015

9-02-2015 VoltDB community install on laptops (status : stuck)

I am going to install the community version of VoltDB. This means you have to compile it. Hope this changes. I will try to get the enterprise edition somehow.




sudo apt-get -y install ant build-essential ant-optional default-jdk python \
    valgrind ntp ccache git-arch git-completion git-core git-svn git-doc \
    git-email python-httplib2 python-setuptools python-dev apt-show-versions


git clone https://github.com/VoltDB/voltdb.git

cd voltdb
ant

  # following instructions in 2.2.3 and then 2.2 to install.
ant dist

tar -zxvf  ./obj/release/voltdb-5.6.tar.gz -C /opt
ln -s /opt/voltdb-5.6 /opt/voltdb

  # Puts the binaries in the path of all accounts. 
  # I don't like having to modify each path in each account. 
  # You could modify an etc file for the path I guess. 
cd /opt/voltdb/bin
for i in `ls`; do echo $i; rm -f /usr/local/bin/$i; ln -s /opt/voltdb/bin/$i /usr/local/bin/$i; done

  # This turns off something in memory. 
echo never >/sys/kernel/mm/transparent_hugepage/enabled

  # Reduce swappiness.

echo "" >>  /etc/sysctl.conf
echo "vm.swappiness=10" >> /etc/sysctl.conf
echo "" >>  /etc/sysctl.conf


  # Add the following to /etc/rc.local
#!/bin/bash
for f in /sys/kernel/mm/*transparent_hugepage/enabled; do
    if test -f $f; then echo never > $f; fi
done
for f in /sys/kernel/mm/*transparent_hugepage/defrag; do
    if test -f $f; then echo never > $f; fi
done 

  ### then reboot


sudo apt-get install screen  # Add this to my server setup. 
sudo apt-get install tmux


reboot


On server mark4:

  • voltdb create # as the user mark 
    • I don't know if it's because it's a community edition or what, but it spat out some messages.
      Build: 5.6 voltdb-5.5-158-gdb3a5fd Community Edition
      Connecting to VoltDB cluster as the leader...
      Host id of this node is: 0
      Initializing the database. This may take a moment...
      WARN: Strict java memory checking is enabled, don't do release builds or performance runs with this enabled. Invoke "ant clean" and "ant -Djmemcheck=NO_MEMCHECK" to disable.
      WARN: This is not a highly available cluster. K-Safety is set to 0.
      WARN: Durability is turned off. Command logging is off.
      Server completed initialization.
      
  • Now I need some sort of license, so I am stuck because I want to set up a cluster. Do I need a community license?

Thursday, August 27, 2015

8-27-2015 Lazy and education (status: done)

I have been lazy about doing stuff because I was reading a lot. I also condensed my blogs that had no useful information. For this blog, I will have something indirectly useful for the Hip Hop Chess Federation. And I have decided to do things manually on the home laptops only, but before they get pushed to production they must be in salt or dad or both. I need to learn about these technologies, which is a lot of work. Stuff will be set up as I would envision salt or dad ultimately doing it.

 http://www.menprojects.com/projects/hhcf/HHCF.py

Update: rather than a new blog post, I also made a webpage to upload a file and have it processed. Next, I am going to try to make a very basic Android application written in Python.

Sunday, August 23, 2015

8-23-2015: Adding more laptops at home for 4 computer clusters (DONE)

I have 3 laptops at home. One was for my TV and one was an old crappy laptop. I don't want to use them for real work. So I got 2 more laptops and hooked them up to my monitor that has 2 HDMI ports. Now I have a cluster. Here are the steps I used to install.



  • Install Linux
    • Download the latest Linux Mint and burn it.
    • Put the CD in the external CD-ROM drive first; the BIOS might not detect it.
    • Reboot into the BIOS and make the CD-ROM the first bootable device.
    • Remove secure boot in the BIOS.
    • Reboot 
    • When it comes to partitions:
      • 100 MB for the EFI partition
      • 100 MB for swap; I know it's overkill.
      • 10000 MB for /BACKUP
      • rest for /
  • Post Install STEP 1
    • In the control panel, for power management, set it to do nothing when the laptop is closed and turn off any power management after a time.
    • Run the script I have below.
    • In screensaver, make bsod the default and no locking when screensaver is activated. Who cares. This is for testing. 
  • Post Install Step 2 . 
    • Add the two systems to DNS 
    • Install DNS on the two systems
    • Exchange SSH keys. 
    • Install SALT on those two systems. 
  • Then for the future, I have a cluster of 3 servers for mongo, Vertica, VoltDB, MySQL with Galera, and maybe MaxScale.



DNS entry.

ns3   IN A 192.168.1.30
mark4 IN CNAME ns3
ns4   IN A 192.168.1.179
mark5 IN CNAME ns4

Install script


echo "deb http://us.archive.ubuntu.com/ubuntu/ trusty multiverse" >> /etc/apt/sources.list
echo "deb-src http://us.archive.ubuntu.com/ubuntu/ trusty multiverse " >> /etc/apt/sources.list
echo "deb http://us.archive.ubuntu.com/ubuntu/ trusty-updates multiverse " >> /etc/apt/sources.list
echo "deb-src http://us.archive.ubuntu.com/ubuntu/ trusty-updates multiverse " >> /etc/apt/sources.list

apt-get update

   # Unfortunately, you will have to answer questions for some of these. 
apt-get -y install emacs tripwire nmap lynx curl vnstat ttf-mscorefonts-installer
apt-get -y install xscreensaver-data-extra xscreensaver-gl-extra xscreensaver-screensaver-bsod
apt-get -y install ncftp openssh-server openssh-client

cd /usr/share/applications/screensavers
find . -name '*.desktop' | xargs sed -i 's/OnlyShowIn=GNOME;/OnlyShowIn=GNOME;MATE;/'

wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
sudo sh -c 'echo "deb http://dl.google.com/linux/talkplugin/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
sudo apt-get install google-talkplugin 

  # It seems to cause problems with apt-get update
rm -f /etc/apt/sources.list.d/google.list
   
   # This might make the computer seem to stall, but just wait. 
apt-get -y install xfonts-*

apt-get update
apt-get -y upgrade

Saturday, August 8, 2015

8-8-2015 Install Mongo 3.0 from start to finish (status : just started )

The purpose of this task is to install Mongo as a complicated sharding cluster or single server (with replicas). I did Mongo 2 for over a year, and I am getting certified in Mongo 3.0 for fun, and I like the direction Mongo is going, so I think it will be here for a while.

A big issue is the initial installation, the setup, and then graphing later. I need to install graphite at least before I am done with this blog entry.


  • Install DAD modules, develop them. 
  • Integrate with Salt
    • Make a module on the client side which
      • Checks if there is already a mongo installed, if there is enough diskspace, validates we can install. 
      • Install mongos, mongo config, or mongo data; either a standalone replica set, or a plain standalone.
        • We need a config. 
      • The module will probably use existing salt modules. 
      • Return the data as yaml or multi-level dictionaries. 
    • DAD can detect if this server can execute stuff on other servers via salt.
      • Or via sudo.
      • Detect if the client-side module is installed.
  • Once integrated with SALT or sudo, 
    • Choose a naming convention for mongo installations because we might have more than one mongo on a box. Allow crazy configurations (which can be useful for test or QA boxes).
    • Connect and execute detection scripts on all targeted hosts. 
    • Then execute the install scripts. 
    • Execute test scripts. 
    • Run as a background job and report status.
  • Make a DAD interface for monitoring mongo in its basic form. 
    • Include basic graphite stuff. 

Setting up dad. The initial setup is crude. I will need to make a package that installs this. I just

 ln -s /SOME/DIR/work/git/dad /usr/lib/python2.7/dad

instead of making a package. Then a simple dad is just "import dad" in a python script. I am going to have to make packages for the target servers.

I am going to use MariaDB, because later I want to use the MariaDB Max stuff. https://downloads.mariadb.org/mariadb/repositories/#mirror=jmu&distro=Ubuntu&distro_release=precise--ubuntu_precise&version=10.0

sudo apt-get install python-software-properties
sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
sudo add-apt-repository 'deb http://mirror.jmu.edu/pub/mariadb/repo/10.0/ubuntu precise main'

sudo apt-get update
sudo apt-get install mariadb-server

# changed the datadir in /etc/mysql/my.cnf to /data/mysql_00
mkdir -p /data/mysql_00
chown -R mysql /data/mysql_00
mv /var/lib/mysql/mysql /data/mysql_00/mysql 

service mysql stop
service mysql start


TODO: my.cnf settings, connection pooling, buffers, connections, etc.
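For that my.cnf TODO, a rough starting point; these numbers are guesses to be tuned against the box's RAM and workload, not measured settings:

```ini
[mysqld]
datadir                 = /data/mysql_00
# illustrative tuning values -- revisit once there is real load
innodb_buffer_pool_size = 1G
innodb_log_file_size    = 256M
max_connections         = 200
thread_cache_size       = 16
```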
Integrating with Salt.....

Friday, August 7, 2015

8-7-2015 Setting up your own staging to prod environment at home

Well, not exactly at home. I want prod to be independent of staging and professional quality. This document will be edited as time goes on.

Goal is to develop at home and push to prod and it should still work.
Perhaps I should have a staging environment? No, my stuff is not that important; nobody is going to die. For a money-making website, yes. It's the same, just another step in the chain. If you don't, and you make money, your CEO is a bit weird to say the least. I won't work there, or I'd demand it gets changed. Believe it or not, many companies don't.

2nd goal is to show people that you can get stuff to work. It was so frustrating to deal with outdated versions of software that required 10 times more work or just didn't work. In the past, this was used by a jerk to corner me and make himself look better. I remember a co-worker saying: once that guy is past his 3-month probation, things will change. Yes, they did, lol. Total liar, and I hired the guy. Never trust anyone until after their probation, and even then, you never know. Protect yourself by getting ahead of the trends.


This stuff is done:

  • Setup domain name record with free service. Since it is free I might as well give them some kudos. 
  • Setup Google's applications for menprojects.com to handle email, and other stuff. 
    • Had to point DNS records to google. Did it months ago. 
  • Use blogger.com for blogs. 
  • Buy 3 laptops at home for creating 3-server clusters for mysql, mongo, etc. I got some crappy ones for less than $300 brand new: 4 GB RAM, 500 GB disk, HDMI, and standard stuff.
  • Buy TV with hdmi for laptop monitor. Don't bother with actual computer monitors. This TV is for computer use only. 
  • Setup 7 AWS servers. one main and two sets of 3 clusters. Setup permissions, keys, etc. 
    • A micro computer for 3 years was over $100. Sold. Not much diskspace or power, but that's okay.   
  • Set up DNS at home and DNS at AWS for inside use only.
  • Setup GIT on local and AWS. 
  • Install Percona Cluster (not MySQL Cluster); it uses Galera.
At the time of the last edit, this stuff is to do first at home then on AWS: 
  • Automate the script that creates the package for DAD. And other tools. 
  • Install SALT and get installations scripts for all software. 
  • Add graphing and monitoring scripts and reports. 
    • Integrate with SALT. 
  • Mongo 3.0 certification. 
  • Install and automate VoltDB and Vertica.
  • Install MariaDB with Maxscale and make it automated.  
  • Program DAD for VoltDB, Vertica, and MariaDB with or without MaxScale.
  • Install Redis and VoltDB as cache layers for apache. Different scripts can point to either I guess. 
  • R -- a replacement for SAS and built into Vertica. 

Now for the technologies:
  • SALT: Because I am a Python guy and I hate writing modules for the other ones. I have been stuck waiting on people to write modules --- one of whom actually worked against me. I want to be able to write my modules fast.
  • GIT: You might say mercurial, but there is no python I need written for source control and GIT has a bigger following than mercurial will ever have. Mercurial comes in 2nd for me. 
  • Vertica, VoltDB, and MariaDB over Hadoop and Cassandra. I know MariaDB isn't a data warehouse, unless you want it to be. Hadoop and Cassandra have the market place but I am looking 5 years down the road. Both Hadoop and Cassandra are just not philosophical there with me. 
  • Redis and VoltDB as cache layer. Both are good. 
  • MariaDB and MaxScale : Percona is nice. But MariaDB is more in line with my philosophy. MaxScale takes away some of the reason to use NoSQL. 
  • Percona or MariaDB Cluster: Galera is awesome and I like it a lot. I will switch from Percona to MariaDB Cluster. But kudos to Percona for pushing it. I hope Percona takes up MariaDB's MaxScale and the two companies continue to push each other and be friends.  
  • Apache is my web server choice, but others are good. This is not written in stone for me. I have always been an Apache guy. 
  • LinuxMint, the Ubuntu knockoff. RedHat is all corporate and I disagree with the fundamentals of their company. Ubuntu is my 2nd choice, but I hate Unity and I hate them jumping on the bandwagon to turn your computer into a phone. Buttheads. Sadly, for AWS it's Ubuntu, but I use LinuxMint at home. 
  • Python and Bash. Don't need to say anything. If I do, you're not my audience. 
  • Graphite and Grafana. The one good thing that came out of the backstabbers and foolish people I had to work with (2 bad ones out of many that were great people). Yes, those graphing technologies are great, but they never understood their full power. It is hard to deal with people who can't see the forest for the trees and are tyrants. 
  • Nagios. I am so used to it. 
  • R -- a replacement for SAS. I have been told it is built into Vertica nicely and is pythonish. Makes it a no-brainer. 
I'll add more as I go on. 

Sunday, August 2, 2015

8-2-2015 GIT (status: done)

After taking 2 months off for personal reasons, back in the game.

This is going to focus on GIT.

  1. Develop a home git environment. 
    1. initialize project
    2. Clone the project locally
    3. Make available for remote access
    4. Do a test run remotely. 
  2. Replicate it to AWS via SSH. 
  3. Integrate with SALT somehow.
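Steps 1, 2, and 4 above can be sketched end-to-end from Python (a rough illustration using temp directories and subprocess; it assumes git is on the PATH -- for the remote-access step you would swap the local path for an SSH URL like user@host:/path/to/project.git):

```python
import os
import subprocess
import tempfile

def run(args, cwd=None):
    # Run a command and raise if it fails.
    subprocess.check_call(args, cwd=cwd)

base = tempfile.mkdtemp()
origin = os.path.join(base, "project.git")  # stand-in for the "server" repo
work = os.path.join(base, "project")        # the local clone

run(["git", "init", "--bare", origin])      # 1. initialize the project
run(["git", "clone", origin, work])         # 2. clone the project locally
run(["git", "-C", work, "config", "user.email", "test@example.com"])
run(["git", "-C", work, "config", "user.name", "test"])

with open(os.path.join(work, "README"), "w") as f:
    f.write("hello\n")
run(["git", "-C", work, "add", "README"])   # 4. do a test run
run(["git", "-C", work, "commit", "-m", "test run"])
run(["git", "-C", work, "push", "origin", "HEAD"])
```

Pushing "HEAD" instead of a branch name sidesteps the master/main default-branch difference between git versions.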


This is going to be an ongoing doc. Just started it.
Links:




Wednesday, June 17, 2015

6-17-2015, SALT as in saltstack (status : home done, AWS todo)

SALT is written in Python. Puppet is written in something I don't program in anymore. The choice is clear when it's my choice. I have seen people use Puppet even though someone was SALT certified, because Puppet has been around longer and the more experienced people got their way. But Puppet isn't bad at all. I just prefer SALT. I couldn't care less really. I plan on writing modules, and it's easier if it's in a language I know.

I will be adding to this doc as time goes on, now that AWS, my home laptops, DNS, MySQL (rep and cluster), and mongo are setup. I have to redo some of this in SALT however.

WARNING: you are opening up several ports for salt on these servers. Make sure they are firewalled and those ports are not open to the outside. In AWS, make sure you don't let anybody use those ports and block off access for anyone not in your local area network.

Links:


  • http://docs.saltstack.com/en/latest/
  • youtube: https://www.youtube.com/user/SaltStack
  • User group in bay area: http://www.meetup.com/Salt-Stack-DevOps/


  • Steps
    • Add salt.mylocaldomain to the DNS
      • Add "salt IN CNAME ns2" to /etc/bind/db.mylocaldomain
      • Restart DNS: service bind9 restart
    • The Master (which will also be a minion). We do everything to the master, and then one section for each other service. 
      1. sudo add-apt-repository ppa:saltstack/salt
        sudo apt-get update
        sudo apt-get install salt-master salt-minion salt-syndic
        
      2. sed -i 's/\#master: salt/master: salt.mylocaldomain/' /etc/salt/minion
          # It asks a question I can't seem to get rid of here. Good in a way. 
        
        service salt-master restart
        service salt-minion restart
        service salt-syndic restart
      3. Now setup the minion part of this server -- use the master as its own minion.
        1. When minion is restarted above it will attempt to make its own key and connect to the master.
        2. salt-call key.finger --local   # output should look like:
        3. local:
        4.    28:01:06:52:9e:e6:97:7d:cc:00:2d:7f:fa:16:93:a3
        5. Now accept the key on the master:
          • List the key status: salt-key -L
          • Accept the key either as
            • salt-key -A
            • or, salt-key -a NAME
    Now add the other minions. Make sure "salt" is added to the DNS before you do this:
    • Install it:
      • sudo add-apt-repository ppa:saltstack/salt
          # It asks a question I can't seem to get rid of here. Good in a way. 
        
        sudo apt-get -y update
        sudo apt-get -y install salt-minion
        sed -i 's/\#master: salt/master: salt.mylocaldomain/' /etc/salt/minion
        service salt-minion restart
      • Do an "nslookup salt.mylocaldomain" and check that it comes up correctly. 
        • If so, when you restarted salt minion it should have submitted a key to the salt master. 
      • On the master: salt-key -A. 
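The sed one-liner above can also be done from Python, which is handy later when scripting installs. A minimal sketch operating on a sample config string (edit a copy of /etc/salt/minion in real use; the master name matches my DNS setup):

```python
import re

def point_minion_at_master(config_text, master="salt.mylocaldomain"):
    # Python equivalent of the sed one-liner:
    #   sed -i 's/#master: salt/master: salt.mylocaldomain/' /etc/salt/minion
    return re.sub(r"^#master: salt$", "master: " + master,
                  config_text, flags=re.MULTILINE)

sample = "# minion config file\n#master: salt\n#id:\n"
print(point_minion_at_master(sample))
```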


    The first commands (some of them are in the docs). Do this on the master.

    • salt '*' test.ping
      • On the hosts it should return "True". If not, the DNS might be wrong, you may not have installed salt-minion on the target, its config file may not point to the master, or the master has not accepted its key.
    • salt '*' disk.usage
      • salt '*' disk.usage --out raw
      • salt '*' disk.usage --out yaml

    Now let's put the command in a script. Parsing console output is stupid when it's already parsed in the language.

    Make this script and save it as "salt1.py" (naming it salt.py would shadow the salt package and break "import salt.client"):
    import salt.client
    
    local = salt.client.LocalClient()
    
    result = local.cmd('*', 'test.ping', timeout=2)
    print result
    
    result = local.cmd('*', 'disk.usage', timeout=2)
    print result 
    

    And then execute the script as "python salt1.py". It shows the output, and you can modify the script to deal with the already-parsed data instead of reparsing it from a CLI. I will have examples in scripts for my DAD.
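For the record, local.cmd() returns a plain Python dict keyed by minion id, not JSON text, so there is nothing to reparse. A sketch with a made-up result (the host names are just my home machines):

```python
import json

# Shape of what local.cmd('*', 'test.ping') hands back -- a dict keyed by
# minion id (these hosts are made up to match my home network):
result = {"mark": True, "mark2": True, "mark3": True}

for host in sorted(result):
    print("%s -> %s" % (host, result[host]))

# If you actually want JSON (e.g. to ship it somewhere), serialize it explicitly:
print(json.dumps(result, indent=2, sort_keys=True))
```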


    Now let's parse the "result" output. For this module, it seems to be a nested dict (a hash of hashes). Let's just get the capacity of each partition.




    import salt.client
    
    local = salt.client.LocalClient()
    # I chose cmd_iter since I think it might be multi-process, not sure.
    # I need to actually test it out.
    inter1 = local.cmd_iter('*', 'disk.usage', timeout=2)
    
    for i in inter1:
        for host in i:
            # print i[host]
            retcode = i[host]['retcode']
            if retcode != 0:
                print "ERROR:", host, retcode
            else:
                ret = i[host]['ret']
                for partition in ret:
                    capacity = ret[partition]['capacity']
                    print host, partition, capacity
    
    # Which hosts failed? There must be a better way where the call above
    # returns the ones that failed.
    result = local.cmd('*', 'test.ping', timeout=2)
    print result
    
    

    Now I want to see if this is multi-process (or threaded or whatever)
    import salt.client
    
    local = salt.client.LocalClient()
    inter1 = local.cmd_iter('*', 'cmd.run', ['echo "test" > /tmp/START; sleep 120; echo "test" > /tmp/STOP'], timeout=2)
    
    for i in inter1:
        for host in i:
            retcode = i[host]['retcode']
            if retcode != 0:
                print "ERROR:", host, retcode
            else:
                # cmd.run returns the command's console output (a string),
                # so just print it -- no partitions to parse here.
                print host, i[host]['ret']
    
    

    And looking at the output of the processes on the master server.....
    mark2 ~ # ps auwx | grep minion
    root      4544  0.0  0.5 522460 20036 ?        S    01:14   0:00 /usr/bin/python /usr/bin/salt-minion
    root      4725  0.0  0.0  11744   912 pts/2    S+   01:15   0:00 grep --colour=auto minion
    root      7815  0.0  0.2  73712 10196 ?        Ss   Jun17   0:00 /usr/bin/python /usr/bin/salt-minion
    root      7819  0.0  0.5 522460 20296 ?        Sl   Jun17   7:24 /usr/bin/python /usr/bin/salt-minion
    

    Now looking at the timestamps of the 3 servers.....

    mark ~ # ls -al /tmp/S* 
    -rw-r--r-- 1 root root 5 Jun 25 01:14 /tmp/START
    -rw-r--r-- 1 root root 5 Jun 25 01:16 /tmp/STOP
    
    mark2 ~ # ls -al /tmp/S*
    -rw-r--r-- 1 root root 5 Jun 25 01:14 /tmp/START
    -rw-r--r-- 1 root root 5 Jun 25 01:16 /tmp/STOP
    
    mark3 ~ # ls -al /tmp/S*
    -rw-r--r-- 1 root root 5 Jun 25 01:14 /tmp/START
    -rw-r--r-- 1 root root 5 Jun 25 01:16 /tmp/STOP
    

    We see that they all have the same timestamps, and there were 3 processes running, so that command is verified to be what we want. Imagine if we have 1000 computers. Doing this serially would suck.
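The timestamp check above can be automated. A sketch, assuming we have collected (start, stop) epoch times per host by whatever means (the numbers below are invented to mirror the ls output):

```python
def ran_in_parallel(times, tolerance=5):
    # times: {host: (start_epoch, stop_epoch)}. If every host started
    # within `tolerance` seconds of the others, the command fanned out in
    # parallel; a serial run would stagger the starts by the sleep length.
    starts = [start for start, _stop in times.values()]
    return max(starts) - min(starts) <= tolerance

# Mirrors the ls output above: all three hosts started the same minute.
parallel = {"mark": (100, 220), "mark2": (101, 221), "mark3": (102, 222)}
serial = {"mark": (100, 220), "mark2": (220, 340), "mark3": (340, 460)}
print(ran_in_parallel(parallel))
print(ran_in_parallel(serial))
```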

    Now let's organize stuff ---- doing.


    Thursday, June 11, 2015

    6-11-2015, DNS continued (status: done)


    • Setup DNS servers on all 3 servers at home, each a primary master. Since they won't change, it's okay to just copy the zone to all of them and make them independent instead of secondary DNS servers.
    • Setup AWS DNS server for local domain. 
      • apt-get install bind9
      • Setup resolv.conf: Not sure which change did it, but restarting the server made resolv.conf right. 
        • Added "nameserver 127.0.0.1" to the file /etc/resolvconf/resolv.conf.d/head.
        • /etc/resolvconf/resolv.conf.d/head:search mylocaldomain us-east-2.compute.internal
        • /etc/resolvconf/resolv.conf.d/base:search    mylocaldomain us-east-2.compute.internal
      • Added the forwarder to the amazon DNS. 
        • To the file /etc/bind/named.conf.options
          • forwarders {     172.32.0.2;           };
        • Add this, but you should add more restrictions. Since I let AWS restrict by network, I put this in. Otherwise, none of your servers will be able to query it. 
          • allow-query { any; };
      • Follow the steps in the previous doc from "Now setup the DNS for your own network."
        • Being an idiot, I didn't figure out the command to restart the network, so I just reboot it. 
        • Get the ip addresses from the EC2 console, or the scripts. 
        • Finally, to let the other servers use your DNS server:
          • Open up the DNS port only to those servers you trust. 
          • Add your DNS server /etc/resolvconf/resolv.conf.d/head  on those servers. 
            • TODO --- this needs to be done, last step. 
              • At home, each is a master. DONE
            • AWS
    For AWS servers......
    echo "nameserver 1.1.1.1" >> /etc/resolvconf/resolv.conf.d/head
    echo "search mylocaldomain us-west-2.compute.internal" >> /etc/resolvconf/resolv.conf.d/head
    echo "search mylocaldomain us-west-2.compute.internal" >> /etc/resolvconf/resolv.conf.d/base
    reboot # I must find a better way.
    After reboot, test DNS entries.
    


    DONE at home and work.
    I will need to maintain this, but I am going to leave it out of SALT. Now to SALT, then VoltDB, Vertica, and DAD. 

    Tuesday, June 9, 2015

    6-9-2015 BIND, DNS (status : done)

    I need to install bind at home and on AWS. Home first, because my computers are way more powerful than the micro instances on AWS. Need two things: one as a cache server and one as a master. Then the other servers in the network will cache only from the master. Eventually, set this up in salt. Just want it to work first, and then do stuff in salt after that. Salt shouldn't maintain the main DNS, except maybe parts of it.

    Links:


    • https://help.ubuntu.com/12.04/serverguide/dns-configuration.html
    • https://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-bind-zone.html
    Steps:
    • apt-get install bind9
    • route -n # This gives you the IP address of your router at home, which is also the DNS. For AWS I will have to use the AWS servers. 
    • In the forwarders section of /etc/bind/named.conf.options, I added my router at home and google's DNS for fun.
      forwarders {
           8.8.8.8;    
           192.168.1.1;
                 };
      
      
    • Test that it works locally: nslookup google.com 192.168.1.209 
      • restart bind: service bind9 restart
      • Replace the IP address with your own computer's.
    • Change the resolv.conf to point to your own computer at the file: /etc/resolvconf/resolv.conf.d/head
      • search mylocaldomain
        nameserver 127.0.0.1
        nameserver 8.8.8.8
        # blank space
        
    • Restart network: sudo service network-manager restart
      • /etc/resolv.conf should have your changes. Check it. 
    • Now setup the DNS for your own network. 
      • Edit the file /etc/bind/named.conf.local and add:
        • zone "mylocaldomain" {
           type master;
                  file "/etc/bind/db.mylocaldomain";
          };
          
      • Edit the file /etc/bind/db.mylocaldomain and put in your own hosts. This worked when I tested it. I am sure technically I could make the below better. 
        • ;
          
          ;
          ; BIND data file for example.com
          ;
          $TTL    604800
          @       IN      SOA     ns1.mylocaldomain. root.mylocaldomain. (
                                        2         ; Serial
                                   604800         ; Refresh
                                    86400         ; Retry
                                  2419200         ; Expire
                                   604800 )       ; Negative Cache TTL
          @       IN      NS      ns1.mylocaldomain.
          @       IN      NS      ns2.mylocaldomain.
          @       IN      NS      ns3.mylocaldomain.
          
          @       IN      A       192.168.1.158
          @       IN      AAAA    ::1
          
          ns1     IN      A       192.168.1.158
          ns2      IN      A       192.168.1.209
          ns3      IN      A       192.168.1.50
          
          
          mark  IN CNAME ns1
          mark2 IN CNAME ns2
          mark3 IN CNAME ns3
          
          salt IN CNAME ns2
          
          
          ns4   IN A 192.168.1.30
          mark4 IN CNAME ns4
          ns5   IN A 192.168.1.179
          mark5 IN CNAME ns5
          
          
          
          
        • Restart bind: service bind9 restart
        • Do an "nslookup mark", and mark2, and mark3 to see if they come up. nslookup other domains like google.com to see if they come up too. 
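One chore the zone file brings: every edit needs the SOA serial bumped or bind won't treat the zone as changed. A sketch that increments the plain integer serial used above (this zone uses a bare counter, not the YYYYMMDDNN convention; the snippet below is a trimmed copy of my zone):

```python
import re

def bump_serial(zone_text):
    # Increment the "<number> ; Serial" value in the SOA record; BIND only
    # notices a zone change when the serial goes up.
    def repl(match):
        return str(int(match.group(1)) + 1) + match.group(2)
    return re.sub(r"(\d+)(\s*;\s*Serial)", repl, zone_text, count=1)

zone = ("@ IN SOA ns1.mylocaldomain. root.mylocaldomain. (\n"
        "              2         ; Serial\n"
        "         604800         ; Refresh\n")
print(bump_serial(zone))
```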

    Wednesday, June 3, 2015

    6-3-2015 Immediate list

    1. Finish taking the two mongodb classes if possible. This is in preparation for mongodb certification. -- ABANDONED. Too much to do; taking the exam later anyways.
    2. DONE install DNS at home. This is for setting up easy hostnames.
    • Fix the ip address on the wireless. --- DONE. 
    • Setup primary and document. --- DONE. 
    • Setup DNS with secondaries on all the servers. 
    • Use the google dns, comcast dns, and set these up as cache with forwarding. 
      • The big advantage is once it is cached, further lookups happen locally.  
    3. Setup salt at AWS. Possibly let the local servers also connect to it, or setup a salt server at home too.

    • DNS was setup in AWS to make it easy. 
    • TODO: setup salt in AWS. 

     4. Make SALT do the entire configuration, setup, and initialization.
    • A topology must exist for all of these. 
    • Detect if it exists:
      • No
        • Initialize
        • If part of a group, try to connect to the group. 
        • If it has to be primary (MySQL rep), see if other servers are running. If they are, spit out error messages.
      • Yes
        • See if we can start and reconnect. 
          • If you can't, error out. 
    5. Install these by SALT:
    • Mongodb 3.0
    • Percona - rep
    • Percona -- Galera
    • MaxScale from MariaDB. Yes, I should be installing MariaDB instead of Percona but a lot of companies are hooked on Percona. It sucks. 
    • voltDB
    • Vertica
    6. Add voltdb and vertica to DAD as well as MySQL and MongoDB. I need MySQL Rep and MySQL Cluster as separate sections. 
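The detect/initialize logic in item 4 above is basically a small decision function. A sketch (the states and action strings are my own, not SALT's):

```python
def decide(exists, in_group=False, must_be_primary=False, peers_running=False):
    # Encodes item 4: what SALT should do when it lands on a database host.
    if not exists:
        if must_be_primary and peers_running:
            return "error: other servers already running"
        if in_group:
            return "initialize, then connect to group"
        return "initialize"
    # Topology exists: try to start and reconnect; error out on failure.
    return "start and reconnect"

print(decide(exists=False, in_group=True))
print(decide(exists=False, must_be_primary=True, peers_running=True))
print(decide(exists=True))
```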

    Wednesday, April 22, 2015

    4-22-2015: misc garbage over april

    Just decided to combine all the useless posts.

    4-2-2015: 2 redis classes online. 2 CentOS AWS servers -- should get a third. Need to convert install scripts. Setup fail2ban. Signed up for a hadoop course thing. There are a few of them. Might as well get it over with. Headset/mic in phone -- now I can call from the computer, much easier. I have a minipad I can still do conferencing with. Started python projects on ramsey and DAD. Looking at a method to separate read/write with redis, and to use redis with mongo since mysql just takes up too much RAM. Maybe at home I can do it, but I need to setup my third laptop, and it has Windows on it, which is required for mongo certification --- remote exam and stuff. That sucks.

    4-6-2015: Busy busy busy. Taking the Pivotal course on redis. Finished the 3rd class on redis from udemy over the weekend. Very very good. I highly recommend "Redis" from udemy. The others are useless. Hopefully I can do more programming for ramsey and DAD, and get redis into DAD as well (basically a one-second TTL when getting data, in case everybody tries to beat up a server to get data).

    4-7-2015: Finished the Pivotal redis course, got the certificate; granted, it's online. Such a bum for not getting more done.

    4-9-2015: Hadoop is understandable, but a big project from the MapR classes. They will take a long time to do. I don't want it to be endless -- but it's good. Need some practice first. Will take some easy free classes at udemy for an hour a day while I work on other stuff.

    4-13-2015:
    1. Signed up for puppet certification for the end of the month. After dealing with OPS and a person who could never seem to help because he was too distracted and only into himself, I don't want to rely on someone. I want to know if it is so easy that it can be done, and why not.
    2. Finished personal crap, but more to do.
    3. Need to finish that stupid hadoop course --- with mongodb and puppet certification exams, hadoop will just be casually studied with easy online sources. The exam will have to wait.
    4. Still waiting on getting DAD setup with redis (on myself). It's funny how some people are so into themselves that they learn technologies for job security but are not really that great at them; but since they are the only person you can rely on, they get away with murder.

    4-14-2015: Starting a puppet training course: https://www.udemy.com/learning-puppet/?dtcode=NBTtnMS2K1qq -- follow this first: https://www.howtoforge.com/puppet-ubuntu-14.04. Then I have an exam to take at Pearson Vue for this, and I don't care if I fail the first time. You know what they are asking and can study for it. Often a whole book or course contains lots of useless information and you don't know what is important. I don't recommend "Basic overview of Big Data Hadoop" as it was hard to follow. Giving Hadoop a little break because I want to get certified in puppet, MongoDB, and PostgreSQL first. To me, OPS people take control of your life by running Nagios, Thruk, Graphite, Grafana, and puppet, and you can't get anything done because they are too busy. In addition, some people take it on themselves to be experts in those areas and they suck, but nobody knows they suck because their boss doesn't know it and he is an idiot. No, this isn't about any friend I have (not you who is an expert at quality control). It's about an overworked idiot who specialized in stuff his boss (who couldn't even program) knew nothing about and acted like he was special. The best way to defend against selfish political people is to be able to do what they do so you can show them up. Their lies and stupidity come to a halt. I had a guy who always had to win. I agreed with him 50% of the time; he never agreed with me. He was good most of the time, but sometimes he was incredibly stupid. His boss wasn't any better: good most of the time, but sometimes just short-sighted and dumb. You have to be independent to show people reality sometimes. It's good not to be arrogant or to show you are really better, but plant the idea in their heads and let them take all the credit. They are politicians or demented anyways.

    4-16-2015: Puppet uses Ruby for modules. Didn't know that. That explains why our guy made it such a huge pain in the butt to add anything needed for MySQL. He should have told us --- projects were unnecessarily delayed; if he had explained the situation, we could have gotten more resources. Still, his fault. His fault for not telling us, so he could keep control, because then I would have helped --- maybe --- I converted from Perl to Python and never looked back, and Ruby is too much like Perl. Will still get the Puppet certification, but for my computers I will have them running salt. So, in order of certification: Puppet, MongoDB, PostgreSQL, Hadoop. Maybe Cassandra afterwards. During this time, using salt, DAD, graphite, grafana, etc.

    4-17-2015: I am beginning to realize how immature and crude our setup potentially was with puppet. My theory about centralized configuration might have been correct, and it should have been done differently. This puppet course is very good, clear, and easy to understand. The trick is I watch a little every day so it doesn't overload the brain. I am going to go with salt anyways, after getting certified in puppet, hopefully by the end of this month, if not the end of next month. But I will leave 3 mysql servers in puppet, and 3 hadoop/other systems in salt for my setup --- just so I am forced to deal with both and keep up to date.

    4-20-2015: Living in the dark. It's amazing how much you can learn from getting certified. The certification is not the important part; the learning of various subjects for fun is. Get certified in stuff you like, because it's easy to get sidetracked. DAD + Mongo (with MMS, ops manager, and salt) and also MaxScale, all with graphite and grafana, and predictive resource growth, is the way to go. I am very impressed with MongoDB 3.0 and the direction it's going. It's so funny, but it is so easy to get lost in a fast-paced company; you can't see the forest for the trees. Especially with a narrow-minded, unable-to-program, political, crocodile-smiling ex-boss who makes weird-ass decisions.

    4-22-2015: Immediate:
    1. Mongodb 3.0 with ops manager --- use ops manager to create mongodb services.
    2. Redis + Mongodb --- just setup.
    3. MongoDB certification -- take test next week (study now)
    4. Test mongodb failures from 2.2: adding a server, removing one, etc.
    5. Test document-level transactions. Create a way to undo stuff in a transaction if it fails.
    6. Test a 3rd mongodb server that is all in RAM. Compare queries against it and the 2nd server. Maybe 4 servers: master, secondary, secondary arbiter, plus a 4th all-memory server that cannot become master (hidden, I guess)? Write up an article on this.
    7. Test push-button upgrade in OPS so that it doesn't disconnect or ruin transactions. 

    other: MaxScale in front of MariaDB Galera. Remove Percona or just use the other 3 servers.

    The highest priority is to get one certification out of the way: mongodb 3.0. Need to get certifications out of the way so I can work on DAD, grafana, etc.

    6-3-2015 Didn't know google has DNS servers you can use. The ones from my provider at home seem to be slow. Might use Google's. https://developers.google.com/speed/public-dns/docs/using

    Wednesday, April 1, 2015

    Free online classes and cheap exams: for stuff I like

    I will be updating this as time goes on. Many classes or exams are not worth it. I am filtering it down to the technologies I am familiar with where I like the class or exam. To be honest, there are TONS of youtube videos; who knows if they are any good. And the best way to learn the guts of a technology is to watch the online videos from various conferences, do what they say, watch again, repeat. Then take exams. There is no point paying $10,000 for certification when you can study, take the exam, fail it, study again, and pass it, when the exams are generally about $200 apiece (sometimes multiple exams are required). This is not a problem if you REALLY want to take the time to learn it.

    Relatively cheap exams.
    • MySQL Certification
      • From mysql, they aren't bad. https://www.mysql.com/certification/
      •  http://www.mariadbcertification.org/index-3.html
        • No idea what is going on here. 
    • Python Certification
    • Linux Certification
      • lpi.org
    • Hadoop
      •  http://cloudera.com/content/cloudera/en/training/certification/ccah.html
    • Redis: 
      • NOT CHEAP. You have to take the course too. http://pivotal.io/training 
      • Certificates of completing a class below. 
    • mongo
      • https://university.mongodb.com/exams/C100DBA/about
        • bastards. They don't allow the exam on Linux. I have a laptop on which I have to keep Windows for this purpose. Stupid. Oh well. I hope they make it doable on Linux. I had a mac laptop, but not now. Darn. Registered for the april 28th one. Time to study and stuff.
        • TODO: finish signing up and schedule the exam for may 
    •  Puppet 
      • Pearson Vue http://www.pearsonvue.com/puppetlabs/
      • TODO: Take scheduled exam
    • PostgreSQL
      • TODO: You can take this cert. 
      • http://www.enterprisedb.com/products-services-training/postgresql-certification

    For free or cheap Classes / Tests
    •  http://bigdatauniversity.com/
      • Hadoop: http://bigdatauniversity.com/bdu-wp/bdu-course/hadoop-fundamentals-i-version-3/
      • There are lots of hadoop training courses here. 
        • Apparently not all are free, I think these are:  http://bigdatauniversity.com/wpcourses/?cat=124
    •  Map R --- hadoop replacement
      • https://www.mapr.com/services/mapr-academy/big-data-hadoop-online-training
    • Hadoop
      • Free tests for 180 days
        •  https://university.cloudera.com/certification/CCAH-practice-tests
      • A lot that are free, and cheap ones
        • https://www.udemy.com/courses/search/?q=hadoop&view=grid 
        • I am just going to list the ones I did or am doing, but there are lots. This list will grow; as I get done with one, I start another.
          • DONE Big Data and Hadoop Essentials (free)
          • DONE Basic overview of Big Data Hadoop 
            • This one was hard to follow.
          •  Hadoop Administration - Hands on ($19 after a coupon code from the free class, or it's $200 -- not worth it at $200).
    • Linux
      • LPI has links to free exams and online tutorials for some of their exams. I recommend everyone get at least LPI level 1. 
    •  Python --- todo 
    • MongoDB -- from the mongodb website
      •  M101P: MongoDB for Developers
      •  M102: MongoDB for DBAs
      •  M202: MongoDB Advanced Deployment and Operations
      •  UD032: Data Wrangling with MongoDB (Udacity)
        •  https://www.udacity.com/course/
      • There are others, but I am focusing on DBA stuff.  
    • Postgresql
      •  http://www.enterprisedb.com/store/products/dba-training/01t50000001Nyi7AAC
      • http://www.moderncourse.com/certification.php?cid=6
      • http://www.enterprisedb.com/products-services-training/postgresql-certification


    • Redis
      • Certificate of completion. better than nothing:
        • DONE https://www.udemy.com/learn-redis-step-by-step/ 
        • DONE https://www.udemy.com/learn-redis/ --- best of 3 classes
        • DONE https://www.udemy.com/rapid-redis/
      •  Pivotal --- VERY GOOD
        • DONE https://pivotallms.biglms.com/application-framework?q=5310bc82ecdedfdbd5ac1370

    TODO: Salt, apache, find some.



    Tutorials
    • Hadoop
      • Youtube:  https://www.youtube.com/watch?v=1jMR4cHBwZE&noredirect=1
      • Probably other youtube stuff.
    • MySQL: Conference links HERE. 
    • Python: Conference links HERE.
    • Linux: Conference links HERE. 
    • redis: Conference links HERE. 

    Initialization and List of things to do

    Original start date: 3-31-2015 -- and I will be updating this frequently. When I finish projects, they will be linked here, I hope. 

    I started this page for two reasons:
    1. I want to investigate a bunch of technologies, including advancing my DAD project. 
    2. Prove to a person with a crocodile smile, and to help middle management get results done, because middle management sometimes cares more about image and perceived results than true results (luckily I have only come across this a few times --- I tend to work well with tech managers who don't have Napoleon complexes or political agendas):
      1. grafana and graphite are much more powerful when automated than when used manually. Both are good, but manual is time consuming. 
      2. Percona Cluster on Ubuntu 14.04 worked the first time, compared to all the problems on Ubuntu 12.04. 
      3. Installing grafana and graphite on Ubuntu 14.04 was easy out of the box. I heard of easy ways to install it on 12.04, but the way I did it from the instructions I found was hard.

    Stuff started and done in 3-2015 for AWS project. Some stuff took time because I had to wait on stuff.
    • Setup 3 ubuntu servers in AWS. There was actually a lot of work to get this right. 
      • Reserved instances. 
      • Setup security. Ports between members in group.
    • nagios and plugins
    • pt-schema and other percona tools
    • Mongo setup with 2 replica sets, config servers, and mongos
    • Apache with virtual domains
    • FreeDNS for AWS server hosting apache.
      • freeDNS, use their correct servers. 
    • tripwire and fail2ban
    • setup percona cluster with rsync
      • Not enough memory for innodb backup -- will do at home.  
    • Setup 3 laptops at home with LinuxMint.  
    Stuff to do:
    • Setup a 4th laptop so I have 3 working laptops; they should be able to run my install scripts for the AWS project easily. 
    • Start AWS project with install steps and scripts. 
    • Start ramsey project. 
    • Python
      •  Redo the Python to follow Python standards.
      • unit testing 
    • osticket
    • git
    • 2 centos machines, maybe 3. 
    • DAD
      • MongoDB 3.0 and MMS and ops manager, salt for automation, graphite and grafana
      • use yaml format
      • Make modules semi-independent so you can load just parts. 
      • into git
        • get free space. It's GPL2 or GPL3; I haven't decided. 
      • backup script
        • Backup binlogs
        • Backup cluster
        • Backup mysql 
          • mysqldump
          • snapshot via lvm
          • innodb backup
          • cluster backup
      • Archive
        • Into MyISAM merge tables. 
        • Into Innodb tables
      • Correct replication checks
      • Slow log checks
      • Slow log analysis feeder and grabber on server side
      • feeding graphite
      • local monitor/collector software on each server
        • Include slow logs
        • Include os status
        • Include ability to throttle queries
        • Include for nagios checks. A summary check and then details if something is wrong.  
        • include any database
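The "summary check, then details if something is wrong" idea maps naturally onto Nagios exit codes. A sketch of the collector's summary logic; the check names, messages, and thresholds here are invented for illustration:

```python
def summarize(checks):
    """checks: dict of name -> (ok: bool, detail: str).
    Returns (exit_code, summary, details) in Nagios convention
    (0 = OK, 2 = CRITICAL). Details are only surfaced on failure."""
    failed = {name: detail for name, (ok, detail) in checks.items() if not ok}
    if not failed:
        return 0, "OK: %d checks passed" % len(checks), []
    details = ["%s: %s" % (name, detail) for name, detail in sorted(failed.items())]
    return 2, "CRITICAL: %d/%d checks failed" % (len(failed), len(checks)), details

# Example: one summary line for Nagios, details kept for drill-down.
code, summary, details = summarize({
    "replication": (True, "0s behind"),
    "diskspace": (False, "/var/lib/mysql at 93%"),
    "slow_log": (True, "12 slow queries/min"),
})
```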
      • integration with other monitoring software
        • mysql
        • mongo
      •  Projections (a supposed DBA didn't believe in these, lol)
        • cpu
        • memory
        • diskspace
        • load
      • History
      • Slow log reports
      • General log reports
      • server comparison 
        • manual with one or more servers
        • Assign a server to a server. 
          • Note allowed differences
        • List which variables to look at. 
      • schema comparison
        • Assign server to a server to compare to in an admin page, by service_id with no drop downs. Print service_id next to name. 
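Once each server's schema is dumped (e.g. from information_schema.COLUMNS) into a dict of column definitions, the comparison core is a simple diff. A sketch, with the data shape invented for illustration:

```python
def schema_diff(ref, other):
    """ref/other: dict of 'table.column' -> column type string,
    as pulled from information_schema on each server.
    Returns (missing, extra, changed) relative to the reference server."""
    ref_keys, other_keys = set(ref), set(other)
    missing = sorted(ref_keys - other_keys)    # on ref, absent on other
    extra = sorted(other_keys - ref_keys)      # on other, absent on ref
    changed = sorted(k for k in ref_keys & other_keys if ref[k] != other[k])
    return missing, extra, changed

# Example comparing two servers' 'users' table definitions.
ref = {"users.id": "int(11)", "users.name": "varchar(64)"}
other = {"users.id": "bigint(20)", "users.email": "varchar(128)"}
missing, extra, changed = schema_diff(ref, other)
```

The admin page would pair servers by service_id as described above and render these three lists, with allowed differences filtered out.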
      • Admin page to add servers. 
      • Add Mongo, Redis, and others to main page. 
      • Add ability to:
        • use short names
        • red/yellow to highlight potential problems
        • use replication names
        • ignore querying data from mysql server for diskspace, etc.  
        • admin page to easily add services
        • Group page (I think it's there)
      •  Shard analysis or servers group analysis
      • Problem Page
        • Replication
        • settings
          • Mysql
          • OS
        • slow logs
        • diskspace
        • backups
        • others -- you know what I mean 
      • my.cnf check
      • Add mysql or mongo or other db as datastore. 
        • In theory, database doesn't have to be networked. 
        • Just need to store int, varchar, string, blob, dates (which could be integers).
        • Script to set up the DAD tables and an interface for it. MySQL, Mongo, SQLite, PostgreSQL, and others should work. 
        • Put redis in front of it. 
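Since the datastore only needs to hold ints, varchars, strings, blobs, and dates (which could be integers), the interface can be tiny. A minimal sketch with a SQLite backend; the table and method names are made up, and other backends (MySQL, Mongo, Redis in front) would implement the same pair of calls:

```python
import sqlite3

class KeyStore:
    """Minimal typed key/value store for DAD data; any backend that can
    store the primitive types listed above could implement put/get."""
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS dad_store "
            "(k TEXT PRIMARY KEY, v BLOB, vtype TEXT)")

    def put(self, key, value):
        # Record the Python type so we can restore it on read.
        self.db.execute(
            "REPLACE INTO dad_store (k, v, vtype) VALUES (?, ?, ?)",
            (key, str(value), type(value).__name__))

    def get(self, key):
        row = self.db.execute(
            "SELECT v, vtype FROM dad_store WHERE k = ?", (key,)).fetchone()
        if row is None:
            return None
        value, vtype = row
        return int(value) if vtype == "int" else value

# Example: store a metric; dates could be stored as integer timestamps.
store = KeyStore()
store.put("disk_free_mb", 2048)
```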
    • MySQL
      • Replication
    • MariaDB MaxScale
    • puppet
      • Taking exam at end of month. 
      • Integrate it to manage the other servers.  
      • Integrate when changes are made, or when noop is done it sends a report back to a server.  
      • Set up each server so it checks out the configs and executes them itself. Masterless. Then the master's only duty is holding the puppet config for the remote host.  
      • Perhaps have a database that stores variables per machine, so that when scripts run they don't have to rely on the hostname to figure things out. You could have a database and a web server in the same setting; the database would hold the information, including values that contain lists, etc.  
        • Can you put variables in facter from the database?
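Facter does support "external facts": an executable dropped in /etc/facter/facts.d that prints key=value lines gets picked up as facts. So per-machine variables could come from the database via a small script. A sketch where fetch_host_vars is a stub standing in for the real database query (the table and values are hypothetical):

```python
#!/usr/bin/env python
# External-fact sketch for Facter: print key=value pairs for this host.

def fetch_host_vars(hostname):
    # Stub: the real version would run something like
    # SELECT name, value FROM host_vars WHERE host = %s
    rows = {
        "db1": {"role": "database", "cluster": "percona-main"},
        "web1": {"role": "webserver", "cluster": "apache-front"},
    }
    return rows.get(hostname, {})

def facter_lines(hostname):
    """Format the host's variables as Facter external-fact output."""
    return ["%s=%s" % (k, v) for k, v in sorted(fetch_host_vars(hostname).items())]

if __name__ == "__main__":
    import socket
    print("\n".join(facter_lines(socket.gethostname())))
```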
    • hadoop
    • cassandra
    • graphite
    • grafana
      • Manual graphs
      • Automatic graphs (maybe graphite only)
    • Anemometer replacement, since it's broken (in a scalability sense); the replacement will be in Python. I already wrote this years ago in a cruder form. 
      • Data leaks
      • Schema not optimized
      • Anemometer is written in PHP and very simple. I already wrote this in Python in 2006, and again in 2013, but without the pretty interface. Writing that in Python will be a learning experience. 
      • use yaml for everything, except config files. 
        • basic config files use the same one for mysql parsing. 
    • Rain Gauge replacement, since it's broken (in a scalability sense); the replacement will be in Python. I wrote a very similar thing in 2006. 
      • schema not optimized
      • dangerous settings
        • Needs to auto scale back
        • needs default settings to not be dangerous
        • Dangerous settings only turned on when asked
        • It's not written in Python; I already wrote this in Python years ago, in 2006. 
    • https login with cookie (if needed)
    • MHA
      • MySQL replication
      • Percona Cluster
      • Make scripts for each of them.
    • Look for other MHA replacements
    • Redis
      • DONE 1st class : udemy
      • DONE 2nd class : udemy
      • DONE 3rd class : udemy -- do only the "redis" class
      • DONE: pivotal cert on redis
      • Map queries and schema to redis automatically, if the schema/query satisfy properties. 
        • Validates tables and keeps it recorded somewhere.
        • Semi-complex queries to be mapped, which include conditions in the WHERE clause that are not on the primary key, as long as the WHERE conditions are at least indexed. 
        • Make independent python module in dad.   
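The eligibility rule above ("map only if the WHERE columns are at least indexed") can be sketched as a pure check against the table's index definitions. The data structures here are invented; the real module would pull them from SHOW INDEX or information_schema:

```python
def redis_mappable(where_columns, indexes):
    """where_columns: columns used in the query's WHERE clause.
    indexes: list of index column-lists for the table,
    e.g. [["id"], ["last_name", "first_name"]].
    Mappable only if every WHERE column is covered by some index."""
    indexed = {col for idx in indexes for col in idx}
    return all(col in indexed for col in where_columns)

# Example: a lookup by an unindexed column is rejected.
ok = redis_mappable(["id"], [["id"], ["last_name", "first_name"]])
```

A stricter version would also require the columns to form a leading prefix of one index, matching how MySQL actually uses composite indexes.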
      • Slow queries
      • Get RedisLabs account
        • integrate it with DAD 
      •  hyperloglog, pub/sub, scripting
    • Memcached
      • Compare and run test to compare to redis
    • VoltDB 
    • Mongo script testing and work.
      • Script to test connections when adding/removing node
        • When a replica set member goes out. Should be a positive result where the failure is detected.
        • mongos goes out. Should fail.
        •  This should not fail (some did before 3.0):
          • Adding a node
          • removing a node
          • Removing a replica set (that is drained)
          • Adding a replica set
        • Script should connect to everything in the replica set and do reads and writes. 
      • Setup Mongo with redis for reads and writes. Possible? 
      • Ops Manager, graphite, grafana, and DAD
    • Chroot --- previous howtos would help here.
      • Apache
      • Mongo
      • MySQL -- might be more of a pain than it's worth.
      • VoltDB
      • Hadoop
      • Cassandra
      • Redis
      • SSH login  
    • postgresql
      • and monitoring tool
    • sqlite
      • and monitoring tool
    • Packet sniff
      • to download all mysql commands
      • to download all mongo commands
      • other databases or stuff, apache? 
    • Article on AWS
      • Get servers
      • Setup DNS
      • Point to DNS servers
      • Setup virtual domain
      • Setup blog
      • TODO: Setup ticketing system behind SSL
      • TODO: Setup redis, DAD, mongo, mysql 
    • AWS
      • Buy cheap database RDS for 3 years. MySQL preferred.  
    • Setup Systems at home and AWS
      • Setup salt server on AWS. 
        •  If possible, have it configure the home server as well, or set up Salt at home.  
      • Setup DNS at home and work, controlled by salt.
      • Install the following by salt:
        • All services must be installed in the following way:
          • Configuration for topology defined in SALT somehow. 
          • Detect if already running. 
          • If running:
            • Try to reconnect. 
            • Spit out error messages if couldn't connect. 
          • If not running, detect whether we can initialize.
            •  If we can initialize, do so. 
              • Connect to group, or make it the primary if none of the others are up. 
              • If we made primary, report it somewhere. 
            • Or spit out error.
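The install flow above (detect if running, reconnect or error, else initialize and either join the group or become primary) is the same for every service, so Salt could call one shared decision function per service. A pure-logic sketch; the action names are invented:

```python
def decide_action(running, can_connect, can_init, peers_up):
    """Return the action the install state should take for a service,
    following the detect/reconnect/initialize flow described above."""
    if running:
        # Service already up: try to reconnect, else report the failure.
        return "reconnect" if can_connect else "error: running but unreachable"
    if not can_init:
        # Not running and we cannot initialize: spit out an error.
        return "error: cannot initialize"
    # We can initialize: join the group, or bootstrap as primary if
    # none of the others are up (and report that we did so).
    return "join_group" if peers_up else "bootstrap_primary_and_report"
```

The per-service logic (what "running", "can_init", and "peers_up" actually mean for Mongo vs Percona vs Redis) would plug in underneath this.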
        • Mongo 3.0
        • VoltDB
        • Vertica
        • Redis
        • Percona -- replication on default port. 
          • This can be tricky if automated. 
        • Percona -- Galera Cluster on custom ports. 