Setting up ssh reverse tunnels with reversi

# reversi-server and reversiclient
# Author: kazimof at zzero dot org

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# http://www.gnu.org/licenses/.

The repo has now moved to git:
https://github.com/kazimof/reversi/

WHAT DOES REVERSI DO?
———————
Uses reverse ssh tunnels to expose device ports on a reversi-client’s network to a reversi-server anywhere on the internet.

A reverse ssh tunnel (ssh -R) is easy to establish; the problem comes with connection stability, where the ssh connection can and does hang at either the server or the client end. Reversi tests the validity of the connection from the client every X seconds and re-establishes it if it is not fit for purpose.
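For reference, the underlying tunnel is just a standard ssh -R invocation, something like this sketch (the user, hostnames and ports here are purely illustrative):

```shell
# Expose port 22 of a device on the client's LAN (192.168.1.50)
# as port 10022 on the remote server. -N opens the tunnel without
# running a remote command.
ssh -N -R 10022:192.168.1.50:22 tunneluser@server.example.com
```

This is the command reversi keeps alive; run on its own it will silently hang when the connection breaks, which is exactly the problem reversi addresses.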

EXAMPLE USE 1:
—————–
You need to manage devices on a client's LAN but you have no access to their office broadband router to configure port forwarding.

EXAMPLE USE 2:
—————–
You have OpenWRT devices installed within a public mesh network and you need to be able to manage them.

EXAMPLE USE 3:
—————–
A company router has died, their broadband has been cut, or they have moved premises, so they are on a temporary 3G/4G connection while waiting for a new broadband router with a static IP. The server is not receiving mail because port 25 is no longer forwarded by the router, and webmail is unavailable to users for the same reason. Using an OpenWRT device inside the network, I forward ports 25 and 443 of the mail server to the internet, allowing the company mail server to receive mail and service webmail requests.

HOW DOES IT WORK?
—————–
reversi-clients are equipped with a passphrase-free ssh key for a user on the reversi-server. Using this key the reversi-client is allowed to do two things:
1) run a script that forwards any local port on its network to the reversi-server
2) run a script on the reversi-server, passing arguments that tell the reversi-server which node it is, which port it is using and what kind of response to expect from that port

On the client side both scripts run in GNU screen: one holds the reverse connection and the other acts as a connection monitor, killing and re-establishing the reverse tunnel according to the server's response from the script referenced in 2) above.

On the server side, the script referenced in 2) above tests whether the port is responding with the expected response. If not, it assumes the connection has hung, identifies the appropriate PID and kills it. The script then returns a number specifying how long the client monitor script should wait before retrying the connection.
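Put together, the client monitor logic amounts to a loop like the following sketch. check_tunnel and restart_tunnel are hypothetical stand-ins: in the real scripts, check_tunnel is the ssh call to the server-side script (which prints the number of seconds to wait before retrying, or 0 when the tunnel is healthy), and restart_tunnel kills and relaunches the screen session holding the ssh -R process.

```shell
#!/bin/bash
# Sketch of the reversi client-side monitor (names hypothetical).

check_tunnel () {
  # stub: the real version asks the server-side script whether the
  # forwarded port answers as expected; it prints a retry delay in
  # seconds, or 0 when the tunnel is healthy
  echo 0
}

restart_tunnel () {
  # stub: the real version kills the screen session holding the
  # ssh -R process and starts a fresh one
  echo "tunnel restarted"
}

monitor_once () {
  wait_secs=$(check_tunnel)
  if [ "$wait_secs" -gt 0 ]; then
    restart_tunnel
    sleep "$wait_secs"
  else
    echo "tunnel OK"
  fi
}

monitor_once
```

In the real setup monitor_once would run inside an endless loop in its own screen session, sleeping between checks.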

DESIGN PARAMETERS
—————–
Reversi needs to run on a variety of client OSes: I am managing a mix of hundreds of OpenWRT devices, Linux servers and OS X installs. I needed this solution for them all, and I am too lazy to maintain multiple forks.
The two programs available on all platforms are:

screen
ssh

DESIGN
——
Reversi comprises two components, both installed on the reversi-server.

SERVER: hosts a script that clients are authorised to invoke to monitor and manage their reverse tunnel
RECOMMENDED INSTALL FOLDER: /home/reversi/reversi
CLIENT BUILDER: builds the scripts to be run on the client device
RECOMMENDED INSTALL FOLDER: /root/reversi/client-builder

See INSTRUCTIONS.TXT for installation info.

Under the hood of Microsoft Office365


Since a limited version of Office365 is available (almost) free via TT-Exchange to UK charities, some are opting to switch their email and data systems to Office / Sharepoint 365. There are some important limitations to note for those who wish to make this switch.

Big Change

Sharepoint does not work in the same way as SMB (standard Windows file sharing), which is probably what your users are used to. Like any software change, it can take some time for users to adapt to using it efficiently. For Windows machines the change can be simplified using the OneDrive for Business app, which syncs the online Sharepoint libraries with a local machine. This allows users to work fast and locally using an installed MS Office package, and it also helps in the case of a bad internet connection, because changes will automatically be synced with the online Sharepoint libraries once the connection is restored. Unfortunately this feature is not available on Macintosh devices; Mac users will need to use Office Online.

Privacy

Data may be stored outside of UK jurisdictions. Some organisations are legally obliged to ensure their data is not stored outside the UK, so you need to check your policies. Unlike data stored in the UK, Office 365 data would be automatically subject to a subpoena from a US court, about which the owner of the data will be informed. However, if the agency concerned uses FISA or an NSL to force data disclosure under 18 U.S.C. section 2709(c)(1), Microsoft (or any US-based provider) would be prohibited from even informing the data owner that any disclosure has occurred.

Data retention

Office 365 data retention policies may not comply with your organisation's policies and legal obligations under EU data protection legislation. Of specific relevance to the E1 version of Office 365 (offered to UK charities FOC) are the following points:

  • Office 365 will only keep deleted emails or mailboxes up to maximum of 30 days. Some organisations are required by law to keep copies of deleted data for longer periods.
  • If an entire mailbox is deleted the maximum retrieval time is 30 days. Deleted files/folders can only be retrieved from the Second Stage Recycle Bin (administrator accessible) a maximum of 90 days after they were first deleted.

    You can supplement the service in order to remain compliant: there are additional MS products available for data retention compliance, such as Litigation Hold, but for E1 users these attract significant additional per-user/per-month costs.

    Migrating to Office / Sharepoint 365

    E1 licensing does not include desktop software. The E1 product does not include machine licenses, so users will either need to purchase the software separately or use the web-based versions. Some of our users, however, have reported that Office Online (the version that runs in a web browser) lacks performance and is unproductive for them to work with on a daily basis. Although application performance will be influenced by the available bandwidth, usability will also be subject to variations in the load on the hosting Microsoft servers. This load can be high at times.

    Last but not least, Windows desktops are the target OS for Sharepoint users, so users of Mac or Linux will find there are some extra steps involved in using Sharepoint.

    Need help?

    COMM-TECH offers migration and support packages, plus budget solutions to many of these limitations, including the lack of backup and retention.

The pros and cons of cloud hosting


    “We’re moving to the cloud!”


    The marketing gurus who invented the term “Cloud” have done a really good job. Because it sounds like water vapour it must be cheaper. And of course – everybody’s doing it.

    Typically this option is considered when a company finds that their reliable but ageing office server needs upgrade or replacement. Since this upgrade can represent a surprisingly costly investment, you may be considering migration to cloud as an option.

    Roughly, the term translates simply as relocating the contents of the company server to a similar device in a Data Centre. Once the system is in a DC you can do some clever things with it, such as federating or outsourcing various services to specialist providers to upgrade processing power, speed of access or storage space, or even backing up data to another country to ensure a copy of the data will survive a local disaster.


    Myth #1: “It’s cheaper!”


    No way. Market forces regulate the cost of renting Data Centre servers and space; consequently, for most companies those costs are similar to keeping a local server. You'll find that the cheap or free products have limitations, and the extras you find you need are the ones you have to pay for, regularly and relentlessly. Instead of making a capital investment in server equipment, with the flexibility to extend the life of that server according to your financial needs, you are stuck with a fixed cost that will equal the investment route.

    Registered charities can be an exception, in the form of currently low-cost access to services such as Google Apps or Office 365. There are limitations to these services, though, which may mean unexpected costs.


    Myth #2: “We won’t need IT support anymore.”


    No. While the major providers certainly have great documentation about how to fix issues with accessing their servers, good luck actually speaking to someone. The management portals are by their nature complex, and mistakes can result in data loss. And what if a machine won't turn on today? Keep your IT support. They may be willing to put you on a slightly lower rate because they will have been relieved of some of the server management roles, so negotiate that.


    Myth #3: “I can just move everything to the cloud…”


    No way. Not all locally hosted services can be moved to the cloud, for instance many workflow and collaboration suites such as ACT!, Raiser's Edge, Access databases, Sage Accounting and Access Dimensions. Some of these providers do have SaaS (software as a service) options available, albeit costly. Also, some limitations extend to federated services, meaning an organisation will need to adapt its culture and working methods to suit the provider: for example, Google Apps will not allow sharing of mailboxes, and an Office365 Sharepoint file server cannot operate fully with files that are not in a Microsoft format.


    Keeping It Local


    Advantages


    • SSO (Single Sign On), allowing users' company credentials and service permissions to be managed from one place. Domain-based SSO is managed by the local server.

    • Management and control of domain workstations from server.

    • Uniform and automatic user login to drives such as S:, N: etc., and allocation of permissions to use devices such as printers.

    • Automated deployment of shared services and permissions via GPO (Windows) or Login scripts (Linux).

    • Workstation Antivirus deployed and managed from Server.

    • Remote Web Workplace (using an office machine remotely), so that software does not need to reside on the remote worker's machine.

    • A local server is not exposed directly to the internet and, because of its location, is less likely to be targeted with this kind of attack.

    • Assured connectivity to Company Data in the Office – even if the broadband goes down.

    • Office server hardware can be accessed by a local engineer's hands; issues such as refusal to boot and blue screens can usually be dealt with without recovering from a backup.

    Disadvantages


    • A major investment is required when hardware capacity is outpaced by operating system requirements (this is especially true for Windows servers). For domain-based services this also costs a significant amount of engineer time on-site.

    • Premises dependency, vulnerability to disaster. Premises relocation or temporary exclusion as a result of crime, fire or other disaster may result in significant downtime and inability for users to work. Recovering from offsite backup is labour-expensive.

    • A server running 24/7 probably costs in the region of £20 per month in electricity.

    Going Cloud


    Advantages


    • Very high availability (99.6% server uptime); multiple users have equal access to resources from any location with internet access.

    • Data Centre servers operate in highly controlled environments and are typically of high-end industrial specification and highly scalable, meaning that platform changes, whilst not trivial, are far less disruptive. Server resources can be outsourced to maintain capacity according to need; for instance, email or backup services may easily be sourced from third parties to keep up with organisational capacity requirements.

    • Independent of premises – ideal for remote working. Provides users with the ability to work remotely which can help avoid productivity problems related to loss of access to premises such as disaster, relocation, transport issues or space/desk availability.

    • The remote server is silent and the electricity costs are included in the hosting plan.

      Disadvantages


    • Multiple credentials required for workstations, the server and possibly outsourced services such as email. With SSO not available, workstations will need to be individually configured per user.

    • Workstation anti-virus and software update compliance cannot be centrally controlled.

    • Limited access to files only – no services such as remote web workplace.

    • If the office loses internet access there is NO ACCESS to company resources from the office. You will need 4G uplink fail-over connectivity to cater for this type of event. Even though ISP connectivity problems are usually limited to minutes, switching connections may take at least that long, and users are completely prevented from working every time there is a disconnect; this can be disruptive.

    • Vulnerability to zero-day attacks. 0day refers to security vulnerabilities not yet revealed to the public or patched by the OS maintainer (such as Microsoft, Ubuntu, CentOS, ClearOS). See: https://en.wikipedia.org/wiki/Zero-day_(computing). Servers on high-bandwidth connections in Data Centres are more likely to be targeted by zero-day attacks, since they can be incorporated into botnets and used in spamming and Denial of Service attacks. A higher level of system maintenance, audit work and security patching is required to mitigate this risk as far as possible.

    • No physical access. If the machine cannot be reached via software, remote-hands work by Data Centre engineers is necessary, and this can be more costly than recovering from a backup.

    Scripts for monitoring Linux servers

    Tired of regularly logging into our servers to find out if anything is wrong, I wanted to know before something like a spam attack or a disk-eater gets out of hand.

    I wrote some scripts to automate the process and email me some stats. Yes, there are tools to do this for you, but then you have to maintain those tools, and installing and configuring them on so many servers can be a pain. This system needs only ssh and mutt, both lightweight and available in almost all Linux distributions I know.

    Also, it's simply bash installed on one server, and everything is "pushed" fresh to the target servers every time it runs, so updates are automatic and easily deployed.

    Design

    The main script checkservers.sh copies (using scp) the checkservices, checkdiskdf, checkmailq and checkmaillog scripts and config files to each target host, runs them there and emails the output to you.

    On your host servers there is nothing to install except openssh-server. Configure passwordless ssh keys for all the servers in ~/.ssh/config on your monitoring server, with the public keys in /root/.ssh/authorized_keys on the hosts. For help with that see our article: master-ssh-key
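For illustration, a per-host entry in ~/.ssh/config on the monitoring server might look like this (the hostname and key path are hypothetical):

```
Host megatron
    HostName megatron.example.net
    User root
    IdentityFile ~/.ssh/checkserver_key
```

With an entry like this per host, the scp and ssh calls in checkservers.sh can reach each server non-interactively using the short names in server.list.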

    Scripts

     

    To send notification emails you need a mutt wrapper, almost all distributions have mutt in their repos.

    muttcc.sh

    func_muttemail () {

    #### establish current dir
    APPDIR=$( cd "$( dirname "$0" )" && pwd )
    gotdate=`date +%Y-%m-%d-%H-%M`
    DEBUGLOG="$APPDIR/functions-email.log"
    echo "ENTERING mutt.sh" | tee -a $DEBUGLOG

    #### ASSIGN VARS
    mailentity="$1"
    mailsubject="$2"
    maildatafile="$3"
    mailentitycc="$4"
    echo "$mailentity $mailsubject $maildatafile" | tee -a $DEBUGLOG
    #send email
    echo "cc specified: $4" | tee -a $DEBUGLOG
    if [ x"${mailentitycc}" = x ]
    then
    #the cc is not set so do not include a cc
    mutt -F $APPDIR/muttrc -s "$mailsubject" "$mailentity" < $maildatafile
    EX=$?
    echo "no cc specified: $mailentitycc" | tee -a $DEBUGLOG
    else
    mutt -F $APPDIR/muttrc -s "$mailsubject" -c "$mailentitycc" "$mailentity" < $maildatafile
    EX=$?
    echo "cc specified: $mailentitycc" | tee -a $DEBUGLOG
    fi

    echo "$EX"
    echo "##### EXITING mutt.sh #######" | tee -a $DEBUGLOG
    return $EX
    }

    You'll need to install mutt and have access to an SMTP server. Mine is local and needs no authentication. Normally this file would be the user's ".muttrc", but in the wrapper we have specified its location explicitly.

    muttrc

    set from = "me@myserver.net"
    set realname = "CHECKSERVER"
    set use_envelope_from = yes
    set smtp_url=smtp://mylocalmailserverhostname.net/
    set ssl_starttls = no

    checkservers.sh

    #!/bin/bash
    APPDIR=$( cd "$( dirname "$0" )" && pwd )
    . $APPDIR/muttcc.sh
    SERVERLIST=$APPDIR/server.list
    MAILQTESTS=$APPDIR/checkmailq.txt
    MAILLOGTESTS=$APPDIR/checkmaillog.txt
    MAILQSH=$APPDIR/checkmailq.sh
    MAILLOGSH=$APPDIR/checkmaillog.sh
    DISKDF=$APPDIR/checkdiskdf.sh
    CHECKLOG=/tmp/servercheck.log

    while IFS=":" read servername ifmailq ifmaillog ifdiskdf serviceslist
    do
    echo "=========== SERVERCHECK STARTS: $servername =============="
    echo "=========== SERVERCHECK STARTS: $servername ==============" > $CHECKLOG
    echo "---------------------- disks ------------------ " >> $CHECKLOG
    if [ "$ifdiskdf" == "diskdf" ]
    then
    scp $APPDIR/checkdiskdf.sh root@$servername:/usr/local/sbin/
    ssh -n root@$servername /usr/local/sbin/checkdiskdf.sh >> $CHECKLOG
    else
    echo "$servername diskdf not flagged for check" >> $CHECKLOG
    fi

    echo "---------------------- services ------------------ " >> $CHECKLOG
    echo "SERVICEFILE: $serviceslist"
    case $serviceslist in
    no)
    echo "$servername services not flagged for check" >> $CHECKLOG
    ;;
    *)
    scp $APPDIR/checkservices-$serviceslist.txt root@$servername:/usr/local/sbin/checkservices.txt
    scp $APPDIR/checkservices.sh root@$servername:/usr/local/sbin/
    ssh -n root@$servername /usr/local/sbin/checkservices.sh >> $CHECKLOG
    ;;
    esac

    echo "---------------------- mailq ------------------ " >> $CHECKLOG
    if [ "$ifmailq" == "mailq" ]
    then
    scp $APPDIR/checkmailq.sh root@$servername:/usr/local/sbin/
    scp $APPDIR/checkmailq.txt root@$servername:/usr/local/sbin/
    ssh -n root@$servername /usr/local/sbin/checkmailq.sh >> $CHECKLOG
    else
    echo "$servername mailq not flagged for check" >> $CHECKLOG
    fi

    echo "----------------- maillog ----------------- " >> $CHECKLOG
    if [ "$ifmaillog" == "maillog" ]
    then
    scp $APPDIR/checkmaillog.sh root@$servername:/usr/local/sbin/
    scp $APPDIR/checkmaillog.txt root@$servername:/usr/local/sbin/
    ssh -n root@$servername /usr/local/sbin/checkmaillog.sh >> $CHECKLOG
    else
    echo "$servername maillog not flagged for check" >> $CHECKLOG
    fi
    echo "=========== SERVERCHECK ENDS: $servername ==============" >> $CHECKLOG
    email_subject="checkserver results: $servername"
    echo "$email_subject"
    tmp2=$(func_muttemail "myemail@address.com" "$email_subject" "$CHECKLOG" "ccemail@address.com")
    echo "=========== SERVERCHECK ENDS: $servername =============="

    done < "$SERVERLIST"

    checkdiskdf.sh

    #!/bin/bash
    /bin/df -h
    exit

    checkmaillog.sh

    #!/bin/bash
    APPDIR=$( cd "$( dirname "$0" )" && pwd )
    #greps the mail log for various issues
    MAILLOGTESTS=$APPDIR/checkmaillog.txt
    MAILRESULTREPORT=$APPDIR/checkmailresult.log
    MAILRESULTTEMP=$APPDIR/checkmailresulttemp.log
    rm -f $MAILRESULTTEMP
    while IFS=":" read testname before after
    do
    datum=`date "+%b %d"`
    grep -A$after -B$before "$testname" /var/log/maillog | grep "$datum" >> $MAILRESULTTEMP
    done < $MAILLOGTESTS
    resultlength=$(cat "$MAILRESULTTEMP" | wc -l)
    echo "$resultlength"
    if [ "$resultlength" -gt "1" ]
    then
    echo "issues found"
    echo "mail log errors found. lines=$resultlength." > $MAILRESULTREPORT
    cat "$MAILRESULTTEMP" >> $MAILRESULTREPORT
    cat "$MAILRESULTREPORT"
    else
    echo "nothing to report"
    fi

    checkmaillog.txt

    refused to talk to me:0:0
    Connection timed out:0:0
    Name service error for name:0:0
    sender non-delivery notification:0:0
    to=<noreply@:0:0
    421 Too many concurrent SMTP connections:0:0
    testing something:0:0

    checkmailq.sh

    #!/bin/bash
    APPDIR=$( cd "$( dirname "$0" )" && pwd )
    #greps the mail queue for various issues

    MAILQTESTS=$APPDIR/checkmailq.txt
    MAILRESULTREPORT=$APPDIR/checkmailresult.log
    MAILRESULTTEMP=$APPDIR/checkmailresulttemp.log
    rm -f $MAILRESULTTEMP

    while IFS=":" read testname before after
    do
    /usr/bin/mailq | grep -A$after -B$before "$testname" >> $MAILRESULTTEMP
    done < $MAILQTESTS
    resultlength=$(cat "$MAILRESULTTEMP" | wc -l)
    echo "$resultlength"
    if [ "$resultlength" -gt "1" ]
    then
    echo "issues found"
    echo "there is queued mail. lines=$resultlength." > $MAILRESULTREPORT
    cat "$MAILRESULTTEMP" >> $MAILRESULTREPORT
    cat "$MAILRESULTREPORT"
    else
    echo "nothing to report"
    fi

    checkmailq.txt

    Connection timed out:1:1
    Connection refused:1:1

    checkservices.sh

    #!/bin/bash
    APPDIR=$( cd "$( dirname "$0" )" && pwd )
    #checks that something is listening on each expected port

    SERVICETESTS=$APPDIR/checkservices.txt
    RESULTREPORT=$APPDIR/checkservicesresultreport.log
    RESULTTEMP=$APPDIR/checkservicesresulttemp.log
    rm -f $RESULTREPORT
    rm -f $RESULTTEMP
    touch $RESULTREPORT

    while IFS=":" read portnumber service
    do
    lsof -i:$portnumber | head -n2 > $RESULTTEMP
    resultlength=$(cat "$RESULTTEMP" | wc -l)
    if [ "$resultlength" -gt "1" ]
    then
    #service is running there
    echo "OK: $service on $portnumber" >> $RESULTREPORT
    else
    echo "++++++ NOK: $service on $portnumber +++++++++" >> $RESULTREPORT
    fi
    done < $SERVICETESTS

    cat "$RESULTREPORT"

    checkservices-ALL.txt

    22:ssh
    443:https
    993:imaps
    3306:mysql
    389:ldap

    server.list

    This is your list of servers to process; the colon-separated fields match the while-read loop in checkservers.sh (servername:ifmailq:ifmaillog:ifdiskdf:serviceslist).

    megatron:nomailq:nomaillog:diskdf:ALL
    magdelena:mailq:maillog:diskdf:ALL

    In my crontab, I don't want email noise from cron, so I redirect output with > /dev/null 2>&1.

    30 08 * * * /usr/local/sbin/backup_app/checkservers.sh > /dev/null 2>&1

    Debugging

    Debug email issues with functions-email.log

    For output issues log into your remote servers and run the check*.sh files to see what comes out.

    Room for improvement

    Temp files are used per server in the main loop, so take care not to run this script again before it completes, or you'll get nonsense.

    How to choose secure passwords

    Why use secure passwords?

     

    • If your server is exposed to the internet (this will be the case if you have remote access), you can assume that there will be hundreds of hacking attempts on your server every day. These attacks usually come not from humans but from software. The software uses massive dictionaries of human names and potential passwords, trying out millions of combinations until something works. Once the software has access to your account it notifies the human hackers, whose job it is to hack into your server and access the information stored thereon.
    • Don't assume that you are unimportant, or that because your information is not valuable to anyone other than yourself you will not be a target. Ransomware systems could encrypt the entire system and demand money in exchange for a decryption service. Hackers could also use your system to send spam, an activity that would get your server blacklisted, preventing your real emails from reaching their intended recipients.

      Unexpected consequences of being hacked

    • How would you like it if all your contacts were emailed (ostensibly from you) with adverts for Viagra, or with infected file attachments that destroy or damage data on their computers? They may open the attachments because they trust you.
    • There's money in this game, big money. A valid email address can be sold to spammers, fetching up to a pound each. If you have any email addresses in your systems, they are valuable.
    • Data protection liabilities. We all keep names and addresses of our clients, and under the Data Protection Act we are sometimes legally obliged to take care of this information.
    • Infections and spam: as well as virus infections, slow computers and disrupted networks, you could end up inadvertently hosting objectionable material such as hardcore pornography, viruses or worse.
    • Being cut off by your ISP: have you heard of "botnets" and "zombies"? A breach of your server like this can end with your ISP shutting you down for participating in DDoS attacks.

      How do I change my password?

    • On Mac/Linux, do it through the user manager, or open a terminal, type "passwd" and press ENTER. Answer blind (no typing shows) with your old password, then the new password (entered twice), and you're done.
    • In Windows you can press CTRL-ALT-DEL, and click the button “Change Password”

    Password good practice

    A perfect password is impossible; given enough computing power, any password can be cracked eventually. However, we are human, and we can only do our best.

    Bear in mind that using an identical password for different services is highly inadvisable: if one of your services is compromised, the others will be too.

    Good passwords can be a pain to remember, so design a system for yourself and keep it to yourself. A good system will let you remember each password easily and, in the event that you have to reveal one password to someone (for instance a friend needing access in an emergency, or a technician you have asked to recover your computer), your other passwords will not be compromised.

    Something simple that works well for many people is using a PREFIX, a BASENAME WITH SPECIAL CHARACTERS and a SUFFIX. For example, if I need to make a password for my gmail account, I might use:

    2017%BaNaNa*1328PATIEgoogligoogli

    1. The PREFIX (2017) is the year I established this account.
    2. The BASENAME is common to all my passwords; I NEVER write it anywhere except at a password prompt. It's made up of a few things that are easy for me to remember and quick to type. Choose numbers and letters that have nothing to do with you personally or legally, as these might allow someone who already knows a little about you to guess. For this reason, NEVER use relatives' birthdays as part of your basename; they are very easy for attackers to discover.
    3. The SUFFIX is something unique to my gmail password only; I always think of it when I see Google.
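To make the construction concrete, here is the example above assembled in shell. These values are the throwaway example from this article; never reuse them:

```shell
# Hypothetical illustration of the PREFIX / BASENAME / SUFFIX scheme.
PREFIX="2017"                  # year the account was established
BASENAME="%BaNaNa*1328PATIE"   # private core, common to all my passwords
SUFFIX="googligoogli"          # unique to this (gmail) account
PASSWORD="${PREFIX}${BASENAME}${SUFFIX}"
echo "$PASSWORD"               # prints 2017%BaNaNa*1328PATIEgoogligoogli
```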
    • Some examples of reasonable (and memorable) passwords:
      7sing*to*me7
      99harryup1-oSCaR
      1Song^^And^^Dance
    • This is what a rubbish password looks like:
      password
      password1
      tuesday
      123
      harry
      10/12/81