Creating high availability Amazon Web Services for Avalanche validator nodes
Phase 4
Phase 3
Phase 2
Creating a VPC for the Ubuntu-based validator nodes on AWS and deploying the Avalanche validator nodes.
Testing endpoints and estimating delegation fees, network share and capacity.
Monitoring the VPC status with VScout and AVAscan.
Evaluating the best monitoring solution for better security and fast response to downtime.
Creating a storage strategy for recovering the staking certificates and key-pair files in case of a crash, or for migration during network downtime.
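For illustration, a minimal backup sketch along the lines of that storage strategy, assuming the default AvalancheGo staking directory and a hypothetical S3 bucket (paths, bucket and passphrase handling are placeholders, not the project's actual values):

```bash
#!/usr/bin/env bash
# Hypothetical sketch: archive and encrypt the AvalancheGo staking key-pair,
# then push it to an S3 bucket. Paths and bucket name are assumptions.
set -euo pipefail

STAKING_DIR="${HOME}/.avalanchego/staking"   # default AvalancheGo staking directory (assumed)
BUCKET="s3://example-validator-backups"      # hypothetical bucket
STAMP="$(date +%Y%m%dT%H%M%S)"
ARCHIVE="/tmp/staking-${STAMP}.tar.gz"

tar -czf "${ARCHIVE}" -C "${STAKING_DIR}" staker.crt staker.key
# gpg >= 2.1 may additionally need --pinentry-mode loopback
gpg --batch --symmetric --cipher-algo AES256 \
    --passphrase-file /root/.backup-passphrase "${ARCHIVE}"
aws s3 cp "${ARCHIVE}.gpg" "${BUCKET}/staking/staking-${STAMP}.tar.gz.gpg"
rm -f "${ARCHIVE}" "${ARCHIVE}.gpg"
```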
Phase 1
Automation & testing of a variety of system build setups:
Automating reports by creating scripts wrapping the Avalanche APIs (see the sketch after this list).
Building on Vagrant (HashiCorp) and VirtualBox on macOS, hosting CentOS & Ubuntu virtual machines through a multi-tier network setup.
Creating Docker/Kubernetes-based virtual machines on macOS through a multi-tier network setup.
Building on Raspberry Pi 4 for direct access through the router.
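As referenced above, a hedged sketch of the kind of report wrapper built around the Avalanche node APIs, assuming a local AvalancheGo node on its default API port and jq for parsing (the actual reports produced in the project differed):

```bash
#!/usr/bin/env bash
# Hedged sketch of a report wrapper around the AvalancheGo JSON-RPC API.
# Assumes a node listening on the default local API port; jq is used for parsing.
set -euo pipefail
NODE="http://127.0.0.1:9650"

NODE_ID=$(curl -s -X POST -H 'content-type: application/json' \
  --data '{"jsonrpc":"2.0","id":1,"method":"info.getNodeID"}' \
  "${NODE}/ext/info" | jq -r '.result.nodeID')

P_READY=$(curl -s -X POST -H 'content-type: application/json' \
  --data '{"jsonrpc":"2.0","id":1,"method":"info.isBootstrapped","params":{"chain":"P"}}' \
  "${NODE}/ext/info" | jq -r '.result.isBootstrapped')

VALIDATORS=$(curl -s -X POST -H 'content-type: application/json' \
  --data '{"jsonrpc":"2.0","id":1,"method":"platform.getCurrentValidators"}' \
  "${NODE}/ext/bc/P" | jq '.result.validators | length')

printf 'node=%s  P-chain bootstrapped=%s  current validators=%s\n' \
  "$NODE_ID" "$P_READY" "$VALIDATORS"
```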
Research & implementation of FinTech on DeFi
Phase 3
Phase 2
Phase 1
Constructing ETH 2.0 Node validators on AWS Cloud Infrastructure
Creating high availability Amazon Web Services for Cardano Staking Pools and Validation
Phase 1
Phase 2
Phase 3
Phase 4
Migrating ChainLink (a data-feed oracle) to high availability Amazon Web Services
Phase 2
Simulating the ChainLink site reliability ecosystem
Phase 1
Presentations of new system functionality and usage
Stellar Horizon API Research
Supporting Python development on Stellar Horizon API
Implementing Freqtrade exchange services on AWS
Designing an AWS solution for running Freqtrade (Python-based algorithmic trading software) on an EC2 and Docker basis by constructing a secure VPC.
Creating the foundation structure:
Creating a sub-network attached to the company's main AWS infrastructure and designing security and routing barriers.
Creating EC2 nodes according to the minimum technical specifications for ETH2 validator nodes.
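A hypothetical example of such an EC2 launch via the AWS CLI; the AMI, subnet, security group, key name and sizing shown here are placeholders rather than the project's real values:

```bash
# Hypothetical launch of a validator-sized EC2 instance inside the dedicated subnet.
# All IDs and the instance sizing are placeholders, not the project's real values.
aws ec2 run-instances \
  --image-id ami-xxxxxxxxxxxxxxxxx \
  --instance-type m5.xlarge \
  --key-name validator-keypair \
  --subnet-id subnet-xxxxxxxx \
  --security-group-ids sg-xxxxxxxx \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":500,"VolumeType":"gp3","Encrypted":true}}]' \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Role,Value=eth2-validator}]'
```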
Phase 2:
Moving the local virtual ChainLink nodes from Vagrant/VirtualBox to AWS.
Creating a sub-network on AWS and configuring security and routing
Creating CentOS nodes in the sub-network
Creating Docker containers (Ethereum client, Postgres RDB & Chainlink) locally per node (see the sketch after this list)
Running and testing the Chainlink reliability ecosystem
Developing a monitoring system for stability and upgrades.
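A rough per-node reconstruction of that container layout is sketched below. Image tags, client flags and the Chainlink environment variables are version-dependent assumptions, not the exact values used in the project:

```bash
# Per-node container sketch (image tags, geth flags and Chainlink env vars vary by version).
docker network create chainlink-net

docker run -d --name postgres --network chainlink-net \
  -e POSTGRES_USER=chainlink -e POSTGRES_PASSWORD=change-me -e POSTGRES_DB=chainlink \
  -v /srv/chainlink/pgdata:/var/lib/postgresql/data postgres:12

# Ethereum client on the Ropsten testnet; websocket flags differ between geth versions
docker run -d --name geth --network chainlink-net \
  -v /srv/chainlink/geth:/root/.ethereum \
  ethereum/client-go --ropsten --ws --ws.addr 0.0.0.0 --ws.port 8546

# Chainlink node; it will additionally prompt for keystore and API credentials on first run
docker run -d --name chainlink --network chainlink-net -p 6688:6688 \
  -e ROOT=/chainlink \
  -e ETH_CHAIN_ID=3 \
  -e ETH_URL=ws://geth:8546 \
  -e DATABASE_URL=postgresql://chainlink:change-me@postgres:5432/chainlink?sslmode=disable \
  -v /srv/chainlink/node:/chainlink \
  smartcontract/chainlink local node
```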
Creating high availability Amazon Web Services for Cardano Staking Pools and Validation
Phase 1: Implementing Cardano on CentOS 7 using Vagrant for test purposes by compiling the modules and test-running the base systems (see the build sketch after this list).
Phase 2: Implementing the block production & validation configuration on the aforementioned local test system.
Phase 3: Implementing the production system on AWS
Phase 4: Production upgrades, monitoring, high availability & quality management
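A minimal sketch of the Phase 1 test-bed, assuming the public centos/7 Vagrant box and the build steps from the upstream cardano-node documentation of that period (dependency installation such as GHC/Cabal is omitted here):

```bash
# Local Cardano test-bed sketch; box name, repo URL and build targets follow the
# public cardano-node build docs and are assumptions, not the project's exact setup.
vagrant init centos/7 && vagrant up
vagrant ssh -c '
  sudo yum -y groupinstall "Development Tools"
  git clone https://github.com/input-output-hk/cardano-node.git
  cd cardano-node
  # assumes GHC and cabal-install are already provisioned on the box
  cabal update && cabal build cardano-node cardano-cli
'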
Phase 1:
Constructing a ChainLink node on different hardware, virtualised and commercial cloud solutions, i.e. AWS & Docker based. Constructing & testing all basic necessities according to the guidelines, i.e.: fulfilling requests, running an Ethereum client.
Performing system maintenance. Connecting to a remote database.
Configuration variables.
HTTPS connections.
Best Security and Operating Practices
Python development on Stellar Horizon API
Constructing Stellar Core and the Horizon API on multiple macOS and Linux platforms & Docker on premise
Implementing on commercial Cloud solutions i.e.: AWS, GCP
Running a Stellar node and creating validators to participate in the Stellar network
Developing a Python API for Stellar for multiple scenarios (a Horizon query sketch follows below)
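By way of example, the kind of Horizon queries exercised during this research, shown here against the public horizon.stellar.org endpoint with a placeholder account ID:

```bash
# Minimal Horizon query sketch (public endpoint; the account ID is a placeholder).
HORIZON="https://horizon.stellar.org"

# Latest closed ledger and its sequence number
curl -s "${HORIZON}/ledgers?order=desc&limit=1" | jq '._embedded.records[0].sequence'

# Balances of an arbitrary account (placeholder public key)
curl -s "${HORIZON}/accounts/GXXXX...XXXX" | jq '.balances'
```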
Presentations of new system functionality and usage
Creating the foundation infrastructure
Server configuration, domain acquisition, SSL certificate
Creating a MySQL database
Configuring SSH, PHP and phpMyAdmin services
WordPress WooCommerce plugin configuration and setup
Creating the web interface for communications with the Kyber decentralised exchange
Creating addresses on the Ethereum testnet "Ropsten" for payment transfers and commission IDs.
Testing multiple ERC-20 payments from hot and web source wallets, i.e.: MetaMask, CB, for correct invoicing and exchange rates.
Testing ERC-20 payments from cold wallets, i.e.: Nano S/X and Trezor, with a more detailed analysis of exchange rates, commissions, customer invoicing and reports.
Designing an AWS solution for running Freqtrade (Python-based algorithmic trading software) on an EC2 and Docker basis by constructing a secure VPC. Creating the foundation structure:
Constructing test VPC
Creating compute environment, i.e. EC2, EBS, S3, Docker
Creating MariaDB for queries.
Creating AWS ECS & RDS and comparing costs with the self-built system.
Running the Freqtrade CLI on the system for queries and updates (see the sketch after this list).
Testing remote access for CLIs through SSH and IAM implementation.
Packaging the results in RPM format for Red Hat and CentOS dispatch.
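As referenced above, a sketch of running the Freqtrade CLI in Docker on the EC2 host; the image tag, paths and strategy name are the public defaults, used here as placeholders:

```bash
# Freqtrade-in-Docker sketch; host paths and strategy are placeholders.
mkdir -p /srv/freqtrade/user_data
docker pull freqtradeorg/freqtrade:stable

# one-off CLI call, e.g. creating the user directory
docker run --rm -v /srv/freqtrade/user_data:/freqtrade/user_data \
  freqtradeorg/freqtrade:stable create-userdir --userdir /freqtrade/user_data

# long-running trade process, forced into dry-run mode for testing
docker run -d --name freqtrade -v /srv/freqtrade/user_data:/freqtrade/user_data \
  freqtradeorg/freqtrade:stable trade \
  --config /freqtrade/user_data/config.json --strategy SampleStrategy --dry-run
```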
B2B support of multiple architectural variations of e-commerce on heterogeneous web services for Daimler, Edeka, Footlocker, Aventics, Kodak and Haefler.
My focus was analysis, fault repair, architectural and operational documentation, scripting and automation, as well as migrating LAMP (Linux, Apache, MySQL, Python/PHP) or Java applications into modern containers & AWS cloud technologies.
Training SMEs for modern technologies & Cloud
Presenting onsite lecture: AWS IaaS workshop
VPC (Network & content) construction:
Subnets, Route tables, Internet Gateways, DHCP, Elastic IP, Endpoints, NAT Gateways, Network ACLs, Security groups, VPN, DNS (Route 53)
EC2 Linux Red Hat, CentOS (server compute construction)
Server instances, key pairs & SSH access, standard and custom images, load balancers, autoscaling, accessing the CLI and automating through advanced shell scripting. Storage: S3, EBS (LVM), EFS (NFS 4.0). Security and certificate management using IAM and manually through self-created certificates using ssh-keygen, imported into AWS Certificate Manager
Constructing web sites using Apache and Tomcat with a selection of RDBs, i.e. MySQL, Postgres etc.
Demonstrating AWS RDS services for easier access and administration of popular databases, i.e. MySQL, MariaDB, Postgres etc., on top of the previously created VPC (a CLI sketch of the VPC building blocks follows)
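The workshop walked through these building blocks with the AWS CLI roughly as follows (CIDR ranges and names are example values only):

```bash
# Workshop-style sketch of VPC construction via the AWS CLI (example values).
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
          --query 'Vpc.VpcId' --output text)
SUBNET_ID=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.1.0/24 \
          --query 'Subnet.SubnetId' --output text)
IGW_ID=$(aws ec2 create-internet-gateway \
          --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"
RT_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
          --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$RT_ID" \
          --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
aws ec2 associate-route-table --route-table-id "$RT_ID" --subnet-id "$SUBNET_ID"
```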
Presenting onsite lecture: AWS Container I (Docker, Kubernetes) workshop
Presenting onsite lecture: AWS Container II (Docker, Kubernetes) workshop
Manual creation of Docker on the previously constructed VPC and EC2 servers, demonstrating Git access locally and on the web, and other general Git functionality for managing codebases.
Manual creation of Kubernetes on top of Docker using previous EC2 servers and making comparisons.
Automatic creation of Docker using AWS Elastic Container Service (ECS)
Automatic creation of Kubernetes using AWS Elastic Kubernetes Service (EKS) (see the sketch after this list)
Demonstrating the AWS code management repository, CodeCommit, instead of local Git.
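A minimal sketch of the "automatic" variants demonstrated, assuming the AWS CLI and the upstream eksctl tool; cluster names and region are placeholders:

```bash
# "Automatic" container platform creation shown in the workshop (placeholder names).
aws ecs create-cluster --cluster-name workshop-ecs

# eksctl is the upstream CLI for EKS cluster provisioning
eksctl create cluster --name workshop-eks --region eu-central-1 \
  --nodegroup-name workers --nodes 2 --node-type t3.medium
```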
Production Support and Development
B2B support of multiple architectural variations of e-commerce on complex heterogeneous web services for Daimler, Edeka, Footlocker, Aventics, Kodak and Haefler.
Analysis, Fault repair, architectural and operational documentation, scripting and automation.
Technologies
AWS Hybrid migration
Farms of: Red Hat/CentOS, Apache web server, Tomcat, Oracle, MySQL
Tools: Confluence, JIRA, F1, Zabbix
Languages: Mainly Unix BASH
Third party: Intershop
Designing and coaching the internal & off-shore Fujitsu team as well as the Munich-RE US counterpart, for:
(IaaS, PaaS, SaaS), AWS cloud design and build for soft infrastructure (DEV, STG, PROD) & continuous deployment & security implementation.
Product: Wayguard
Implementation of end-to-end application communication across a hybrid local/AWS cloud and a government service. This involved multi-tier development & testing of the application in a CI environment and integration in CD, deployed to production while considering important aspects of security, load balancing & auto-scaling (up and down) and general automation in the different segments of development.
Publications: on request
IT FinanzMagazin: on request
A more security-focused project on AWS, developing a product called Wayguard: an application running on Android and iOS to locate family members, inform the police when they are in trouble, and record their last position at the time of the incident.
The back-end system ran on AWS and connected to cellular services and the Bosch alarm service, which in turn was connected to the Koeln police department.
The development team was outsourced, working on premise with their own CI/CD for development of the app on iOS and Android.
The AWS side consisted of a single VPC using EC2 servers, with dynamic EBS and some S3, running the back-end application that communicated with the mobile application.
The application used RDS and some caching with Redis (AWS ElastiCache these days) for transaction speed.
We used network services such as ELB for load balancing, Route 53 for DNS, WAF as a firewall, and CloudFront at edge locations to speed up data access for mobile.
For production we tested parameters and implemented auto-scaling, as well as Docker for use in the test environments.
Systems Components:
Amazon Services: EC2, EBS, S3, CloudFront, RDS, ElastiCache, WAF, CloudFormation
OS-specific: Amazon Linux, Consul, Apache, Nginx, ModSecurity, GIT, LVM.
Scripting: Ruby, AWS CLI (BASH), JSON, Python
Applications components:
Java, Scala, WebSockets/REST (Android/iOS)
Atlassian: Bamboo, JIRA, Confluence, Bitbucket.
Leading off-shore and on-shore teams and infrastructure team collaboration. Implemented a Spacewalk / "Red Hat Satellite" system across geographical regions (countries) for software deployment, server fleet management, load levelling, high availability and security, in a 3-tier methodology.
Automated registration of ephemeral nodes and the networks of Red Hat proxies. Standardised and created sub-channels for custom applications and third-party packages.
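A simplified first-boot registration sketch of the kind used for those ephemeral nodes; proxy hostname and activation key are placeholders:

```bash
#!/usr/bin/env bash
# First-boot registration of an ephemeral node against the nearest Spacewalk proxy
# (hostname and activation key are placeholders; real tooling picked these per region).
set -e
PROXY="spacewalk-proxy.example.net"      # nearest regional proxy (assumed)
KEY="1-ephemeral-appservers"             # activation key mapping to base + custom sub-channels

rhnreg_ks --serverUrl="https://${PROXY}/XMLRPC" \
          --activationkey="${KEY}" --force
```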
Responsible for research, design and operation of infrastructure for e-commerce software services (Asia division). Strategic implementation of ephemeral clusters of nodes, hardening security, availability, geographical distribution and load balancing, isolation, obfuscation, master backup strategies & replication.
Predominantly on open-source and Unix-based solutions, i.e.: CentOS/Fedora (Red Hat Linux), Spacewalk (Red Hat Satellite), RPM, YUM, Git, Ansible, Puppet, Docker and virtualisation amongst others.
Main languages: C, Shell, Perl, Python.
Atlassian products for build and tracking, i.e.: Bamboo, Confluence, Jira
Sonar for software quality assessments of Java development agile teams
Evaluating CAD software integration & IT infrastructure from technical and business feasibility aspects, considering Product Lifecycle Management and Product Data Management.
Interviewing software vendors such as Dassault, Siemens, Cenit, etc. and interacting with the automotive designers for functional design input.
Analysing existing solutions and VW internal possibilities for further developing and modernising.
Adopting architectural designs from different interpretation aspects of existing infrastructure model and data flow.
Devising technical specifications and interacting with VW IT for implementation conformity and timeline feasibility.
Formulating possible scenarios of implementation, in case of technical or other restrictions.
Analysis, design, construction and consulting management in the interpretation of technical intricacies; systems and processes for deployment and continuous integration of services/applications, in data centres or the Amazon cloud. These applications were mainly developed for the construction of maps (data and code) for embedded navigation systems in a variety of appliances and integrated software, for companies such as Bing, Yahoo, Daimler, Nokia, etc.
Phase I: Software configuration management & release for navigational systems (October 2010 – January 2013)
Phase II: Design and management of development operations on the AWS cloud (Consultant) (January 2013 – December 2013)
Project detail:
Implementing RPM packaging through build servers such as Bamboo and Jenkins into the Spacewalk/Red Hat Satellite repository server, as part of the continuous integration system feeding the Navteq hosting servers for customers such as Yahoo, Nokia, etc. (see the packaging sketch after this list). This involved:
Resolving system issues due to cross-over between development and the different test stages
Resolving cross-overs between data builds and code-base creation
Creating the concept for an end-to-end continuous integration system for implementing Spacewalk (Satellite)
Migration of servers due to naming-convention issues, which involved analysis of the systems at OS and application level whilst moving across build-server types
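As mentioned above, the build-server step amounted to packaging and pushing along these lines (spec name, channel and server are placeholders; the real Bamboo/Jenkins jobs carried considerably more logic):

```bash
# Build-server step sketch: package the artefact as an RPM and push it into the
# corresponding Spacewalk/Satellite child channel. WORKSPACE is the CI job's
# workspace directory (as exposed by Jenkins); all names are placeholders.
rpmbuild --define "_topdir ${WORKSPACE}/rpmbuild" \
         -ba "${WORKSPACE}/SPECS/navdata.spec"

rhnpush --server=http://spacewalk.example.net/APP \
        --channel=custom-nav-stage \
        "${WORKSPACE}/rpmbuild/RPMS/noarch/"*.rpm
```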
Joint venture with Nokia: supporting the "L&C" business for the integration of Navteq and Nokia services. This involved working on automation and further development of a longer chain of continuous integration, due to even more complex environments of multiple tools serving a unified objective.
DELIVERY OBJECTIVES, 2012Q4, 2013H1: AMAZON Cloud Hosting of Services
Phase 1 Navteq
Evaluation of the Amazon cloud services such as: EC2, EBS, RDS, VPC, S3 etc. for necessary tools, network bandwidth speed and calculation power of the virtual units.
Creating a prototype deployment of web services such as NGC6, for testing varied architectural combinations of EC2, S3, EBS, Elastic IP, Route 53 etc.
Automating the deployment through shell scripts provisioned through Ruby and JSON scripts.
Applying multi-threading techniques to tools and scripts for high-speed performance, using the symmetric multiprocessing power of the Amazon virtual units
Implementation of the final model or models according to deployment scenarios
Advancing the architectural model by incorporating CloudFormation and Puppet provisioning
Possibly incorporating the Spacewalk solution (implemented by my previous services) into the cloud
Phase 2 Navteq transition to Nokia
Development of automation, software configuration and cloud integration
Feasibility and integration analysis for a variety of proposed design architectures
General support of software release and development tools
Phase 3 Nokia division to Here
Providing consultation services to the Nokia division Here, for portal design on the AWS cloud for the development operations of geocoding, involving automation and fine-tuning of deployment concepts into the cloud for the different testing stages and some production scenarios. Cost cutting through efficient usage of AWS cloud resources. Collaboration, hand-over and communication with counterparts.
Research & implementation of FinTech on DeFi
Tasks:
Phase 3:
DeFi research with Avalanche in focus. Experimenting with CORE wallet functionality and native BTC.b for DeFi strategies on Avalanche C-chain.
Phase 2:
Pivoting focus to Avalanche Web3 DeFi applications built on the C-Chain, with very low costs and lightning-fast finality, whilst remaining decentralised with over 1,300 validators.
Phase 1:
Analysis of "Decentralised Finance" Applications on trust-less Ethereum ecosystem that rely on cryptographic economy based on mathematical consensus finalities.
All these systems are protocol based and accessible through many types of APIs and Cloud or local implementation. The main focus of these sub-systems simulates the current banking systems for Lending, Borrowing against collaterals, offering commodities and equities in single or indexed form of products i.e.: ETFs. as well as derivatives and synthetics on multiple forms of Decentralised Exchanges and aggregators.
Some other special services such as liquidity pools and dark pools are also in development and these services are not only decentralised and managed by complex mathematical consensuses but also available to public, due to its trust-less nature. Data feeds through decentralised Oracles that can not be manipulated.
More projects on request
1991 – 1993
Newcastle University Australia (Callaghan)
Computer science (Operating systems and Algorithmic problem solving)
1990
Terrigal High School Australia, HSC
3U Maths, 2U Computing, 2U English, 2U Physics, 1U Physical education
Certifications
2002-01
P-Series AIX System Administration Test 19 / IBM
1996-08
Advanced UNIX System Programming / Digital
VMS for advanced users and programmers License 9N-000001-02.0003 / Digital
1995-04
Open VMS for users Digital, License 9N-000001-02.0001 / Digital
CA-INGRES User Interfaces, License QFM.502/250994 / Computer Associates
1995-05
CA Ingres/SQL License QFM.502/250994 / Computer Associates
1994-06
Advanced C programming for Unix / School of Technology and Information Studies TVU London
1992-10
Retail Sales Management Training Programme Stage Two / Tandy Electronics
1992-05
Retail Sales Management Training Programme Stage One / Tandy Electronics
Current Focus
Technical interpretation, delivery synchronisation & support, information acquisition, market adaptation. Devising fundamental systems and infrastructure evaluations for companies of all sizes for modernising their IT structure. Creating design documentation for the transformation from the current status to the modernised model as an incentive for improvement.
Implementing cloud solutions using existing micro-services such as AWS for "platform as a service", or building the services from scratch on private, public or hybrid cloud, or simply in a local data centre.
Discussing specific concerns about modernisation and automation with internal technical teams. Automating configuration management using modern tools for Infrastructure as Code, Platform as a Service, Software as a Service, and Continuous Integration & Deployment using Kanban, from ITIL to Agile software development methodologies.
Platform and software integration to Unix/Linux derivatives, open-source or proprietary systems. Applying fundamental security practices at OS, network and application level, and transitioning to cloud providers' security tools, specifically AWS and GCP amongst other minor providers.
Main Technologies
Chronological focus according to market evolution
Introduction:
With over 20 years' experience in dynamic and mission-critical environments, I have been involved in many IT projects from many technical and tactical perspectives. In recent years I have tended more towards architectural, managerial and team-leadership involvement, including off-shore resource management and building teams of specialists in complex heterogeneous environments, whilst not losing sight of intricate technical aspects, i.e.:
Oversee the development of future component architectures and migration plans.
Conceive, design, prototype, and test new methods, algorithms, and models.
Define and enforce appropriate technical standards and procedures.
Lead the research and development of new software products and applications.
Define system, technical and application architectures for major areas of development.
Periodic adaptation to market evolution:
2020:
Particular focus on Fin-tech and DeFi on Blockchains and Cryptocurrencies.
2019:
Disambiguating "Big data" & Artificial intelligence terms, for customers in need of statistical and data analysis
2018:
Focus on: IaaS, PaaS, SaaS, IaC, CM, CI/CD Cloud computing, particularly on: AWS
2017:
Blockchain Decentralised platform development DLT & DApp
2015:
Open (Cloud & Platform & Configuration management) development, i.e. Openstack, Docker, Ansible.
2014:
PDM and PLM in automotive industry for data accumulation and distribution amongst joint venture companies.
2013:
Cloud computing, specifically Amazon AWS, used for Dev-Ops for development of Geo-coding navigational systems.
2010:
Continuous integration & deployment, i.e. SVN, GIT, Spacewalk, Jenkins, Puppet, alongside Atlassian products
Engineering foundation
During my career development, my main focus has become fundamentally UNIX/Linux systems (System V, Berkeley, POSIX, ANSI C), TCP/IP (Transmission Control Protocol/Internet Protocol), shell programming (Bash, Korn and C shell, with awk, sed, grep and regex) and Perl (scripting/programming) for system, network and data manipulation.
The specific UNIX types in which I further specialise are:
Linux distros, specifically Red Hat Linux. I have been involved with Linux since kernel one at its scientific stages and throughout its evolution into a commercially reliable product, mainly on the 8086 chip-set family and PowerPC.
IBM RS/6000 and P-Series (SMIT, NIM, WLM, LPAR, virtualisation, VIO micro-partitioning, WPAR) AIX
For automation, system performance analysis and security implementation, with applied scripting knowledge in shell and Perl.
I continue to use Python and shell as my main scripting and programming languages for operating system, database (alongside SQL) and network development, amendment and automation, amongst other helper scripting languages such as JavaScript, Golang & Ruby.
Having C as my main strength for system programming, I fall back on this skill for deeper system troubleshooting when necessary.
I have also been involved with many relational databases, such as Oracle, Ingres, Postgres and MySQL, from administration and installation to data analysis and database programming using SQL, Perl and shell, and for more extensive programming, Perl/DBI and embedded C.
History
In 1990, after graduating from Terrigal High School in Australia with a concentration on maths, physics and computer studies, I entered the University of Newcastle (Callaghan), Australia, studying general computer science with an interest in operating systems and algorithmic problem solving.
In 1993 I received a sponsorship from Tandy Electronics to expand my knowledge of the business side of technology by getting certified in electronics retail sales management stages one and two whilst working, instead of pursuing my degree.
In 1995, with my experience of the 8086 chip-set and MS-DOS from Tandy, alongside the business exposure and other electronic products in Australia and then the UK, I started a job working for Apple UK, concentrating on the 68000 chip-set and the new PowerPC computers, Apple OS, and application and network support for household and small-business customers. Meanwhile I was certified as an Apple OS specialist and in the C programming language, alongside my growing knowledge of the Linux operating system.
During 1995-96 I started working for Bytel, a telecommunications software house, as a support engineer. I progressed to senior consultant and on-site trainer for clients for the main product, a subscriber management system. During this work I was sponsored to be certified in relational database administration, SQL programming and embedded SQL programming in CA-Ingres, and in operating system use, administration and programming in VAX/VMS and DCL. Later it was decided to migrate the systems to a more modern architecture: the operating system of choice became Unix and the RDB of choice Oracle.
I attended administration courses for Unix and Oracle for the major migration. By October 1996 I had transferred my knowledge of DCL scripting and VMS operating system support to shell scripting and UNIX administration, alongside interactive SQL updates and embedded SQL programming encapsulated by shell scripts instead of DCL scripts.
Since then I have been concentrating on UNIX as my main speciality for network, database, storage and other aspects of IT. (Published: 2000)
Progression
As an IT consultant, I have been involved in projects from many technical and tactical perspectives. To elaborate, tactical consultation can be described as helping the project with time management, budget, resources and team leadership. Technical consultation can be described as the services that fulfil the project's needs for practicality, integration and compatibility, and helping with communication amongst the multiple business parties involved in the project.
I mainly specialise in Unix systems and their universal network protocol, TCP/IP, which has evidently taken over the IT industry and internet communication from many aspects, with far more success than its rivals. UNIX was once written in assembly language but was gradually ported and reconstructed in the C language, which is amongst the fundamentals of my skills. I have also mastered shell scripting for the system administration needs of the Unix operating system. Gradually I advanced to Perl programming for more versatility across other operating systems and databases, its syntactic conformity with shell, C and C++ code, its far more flexible array handling, and not least its powerful abilities for network and internet construction and amendment.
Unix has been produced by different vendors such as IBM (AIX), HP (HP-UX) and Sun (Solaris), and was later ported, originally as a scientific project, to Linux, which was commercialised thanks to its free source-code philosophy and is these days also produced by many different vendors, Red Hat and SUSE being among the top commercially supported, alongside other derivations such as Ubuntu, Debian, Slackware, QNX etc.
I rather categorise UNIX by the standards it has matured in, System V, Berkeley, ANSI C and POSIX, since the standards are becoming more and more integrated, which allows one to correlate and track the different derivations of UNIX independently of vendors' proprietary standards. Apple has also taken part by introducing OS X, which is fundamentally built on Berkeley. There is no doubt that with modern technologies such as volume managers, file systems and particularly "virtualisation" there are tools specific to each operating system brand, but the concept of utilisation remains fairly uniform.
I started off my IT career in 1994 with one of the well-known operating systems of the past, VMS, and the Ingres relational database. After a year I moved on to Unix as the operating system and to relational databases such as Oracle, Ingres, MySQL and Postgres. During my IT career I have also had plenty of experience with other platforms, such as Microsoft products, in some heterogeneous systems.
Nevertheless, I keep my concentration on Unix systems and on expanding my knowledge of integrating these skills into web business, since pure structural coding is a skill of the past and it is now more important to be able to manage existing code and port it into modern frameworks and modularity.
Experience
Diconium
I was a consultant at Diconium in Stuttgart, Germany, for B2B support of multiple architectural variations of e-commerce on complex heterogeneous web services for Daimler, Edeka, Footlocker, Aventics, Kodak and Haefler.
My focus was analysis, fault repair, architectural and operational documentation, scripting and automation, as well as migrating LAMP-model or proprietary Java applications into modern containers & cloud technologies.
I confronted some challenges on Docker with port-mapping to outside "public" IPs, as well as IPC problems due to CPU architecture, quantity or lack of CPU power on local and on-premise machines. I also implemented Docker and Docker Swarm, and eventually moved to Minikube, but in the end most customers preferred the Docker and Kubernetes services on AWS (ECS and EKS) for ease of use.
I also started a proof of concept with Terraform as a cloud-agnostic provisioning solution.
I ran workshops for the SMEs on AWS & Google Cloud as well as Docker and Kubernetes. Meanwhile I created YouTube videos for the technicians for later review and practical tests.
The main focus of the training was deployment into the cloud while supporting ongoing hybrid operation, especially for Daimler-Benz as the largest customer.
Munich-RE
At Munich Re, an insurance company in Munich, Germany, I designed an automated migration to modern infrastructure and configuration management instead of manual server migration.
There were hundreds of servers in Germany and the US that had to be migrated; I isolated the migration into smaller segments and grew it exponentially after every successful attempt.
We had to modernise the configuration management system away from a legacy HPSA.
Predominantly the target systems were CentOS Linux, so I chose Red Hat Satellite 6.0 in combination with Chef configuration management, in order to also be able to support some Microsoft systems.
At the same time there were three versions of CentOS (5, 6 & 7), which made it slightly more complex.
Some of the configuration on the legacy CentOS 5 and 6 systems was not available on Satellite 6.2, so I created a tool to extract that information separately and apply it to a Git repository.
This Git repo, along with Chef auto-run, was used for recreating the older systems' configuration, updating it and applying it to groups of servers.
On the other hand there were special cases such as HPSA policies; for this purpose I created separate Chef cookbooks & recipes and compiled RPMs to apply special configuration policies.
The next level was to replicate and provision the systems on AWS or Azure. I recommended AWS, as the majority of the systems were Linux-based and we had many problems deploying them on Microsoft Azure.
I created proofs of concept for dockerisation and the implementation of Minikube and Kubernetes for applications, and introduced Terraform for more cloud-agnostic provisioning.
AXA
We were developing a product called Wayguard, an application running on Android and iOS to locate family members, inform the police when they are in trouble and record their last position at the time of an incident.
The back end was a sophisticated system running on AWS, connecting to cellular services and the Bosch alarm service, which in turn was connected to the Koeln police department.
Apart from the CI and CD, the live system on AWS consisted of EC2, EBS, S3, CloudFront, RDS, ElastiCache, ELB, WAF, VPC and Route 53, and in the end the implementation of auto-scaling, as well as Docker for the test environments.
Problem one
I cut down SSL certificate costs by introducing wildcards in the Subject Alternative Name for multiple subdomains, but this caused the connections to go down somewhere in AWS.
After chasing it up with AWS support and not being able to rectify the problem, I found a workaround by changing the service name ("TCP" or "UDP") to the port number on the Elastic Load Balancer, which ended up working.
Problem two
We were using WAF with CloudFront and our product's communications were not going through.
I found out that the problem was due to WAF not supporting the WebSocket protocol that we used for communication in some parts of the product.
Using Nginx as a reverse proxy, I separated the normal communication from the WebSocket communication and implemented a protocol-level firewall on the WebSocket channel, compiling a custom ModSecurity module into Nginx.
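A reconstruction sketch of that split is shown below; upstream addresses and certificate paths are placeholders, and the exact ModSecurity directive names depend on which connector version is compiled into Nginx:

```bash
# Sketch: REST traffic and WebSocket traffic proxied to separate upstreams,
# with ModSecurity enabled on the WebSocket path (all names are placeholders).
cat > /etc/nginx/conf.d/wayguard.conf <<'EOF'
upstream app_rest { server 10.0.1.10:8080; }
upstream app_ws   { server 10.0.1.20:8081; }

server {
    listen 443 ssl;
    server_name api.example.net;
    ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location /ws/ {
        modsecurity on;                                # directive names vary by ModSecurity connector version
        modsecurity_rules_file /etc/nginx/modsec/main.conf;
        proxy_pass http://app_ws;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location / {
        proxy_pass http://app_rest;
    }
}
EOF
nginx -t && nginx -s reload
```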
Riplife
In Malaga, Spain, I started a project with a gaming company called Riplife, indirectly owned by Calvin Ayre.
Since the Chinese government applies strong censorship, I had to find a way to stop them from tracing the servers or shutting down the gaming services.
Using a star topology of Spacewalk proxies, I distributed the servers so they couldn't be traced back, and divided them amongst multiple ISPs connecting from Taiwan.
When a server was attacked, all servers connected to that proxy would shut down, and there was no trace back to the other branches due to strong protocol obfuscation.
Then another proxy was activated and the software was applied through Spacewalk channels.
Volkswagen
Persuading Audi, Skoda & Porsche towards unified CAD software & data pipelines for communication into the very complex IT infrastructure.
Nokia
When the autoscaler wasn't yet available, I created an automatic solution for scaling servers, using LVM to assign EBS storage when needed and remove it when not needed, to save costs.
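A simplified sketch of that storage-growth step, assuming IMDSv1 metadata access and placeholder volume-group names (the real tooling also handled device detection, shrinking and detachment):

```bash
#!/usr/bin/env bash
# Grow storage on demand: create and attach an extra EBS volume, then fold it
# into the existing LVM volume group. Names and sizes are placeholders.
set -euo pipefail
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)

VOL_ID=$(aws ec2 create-volume --availability-zone "$AZ" --size 100 --volume-type gp2 \
           --query 'VolumeId' --output text)
aws ec2 wait volume-available --volume-ids "$VOL_ID"
aws ec2 attach-volume --volume-id "$VOL_ID" --instance-id "$INSTANCE_ID" --device /dev/sdf
sleep 5   # wait briefly for the block device to appear on the instance

# fold the new disk into the existing volume group and grow the data filesystem
pvcreate /dev/xvdf
vgextend datavg /dev/xvdf
lvextend -l +100%FREE /dev/datavg/data
resize2fs /dev/datavg/data
```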
Commendation & References on request
Projects location scope: Australia, UK, Germany
IT OPS: Analysis & Development, Automation, Maintenance & Support, Migration, DR & security
Software Development: Macro Analysis, Statistical/data analysis, Continuous integration, Coding and tools
Main industries: Telecommunication, Automotive, Travel, Finance
Industry pace: Commercial, Maintenance & support, R&D
Sales: Technical interpretation, Delivery synchronisation & Support, Information acquisition, Market adaptation
Creating high availability Amazon Web Services for Avalanche validator nodes
Phase4:
Phase3
Phase 2
Creating VPC for validator nodes "Ubuntu" on AWS by deploying the Avalanche validator nodes.
Testing end points and estimating delegation fees, network share and capacity.
Monitoring the VPC status with VScout and AVAscan.
Estimating the best monitoring solution for more security and high response to down time.
Creating a storage strategy for recovering staking certificates and key-pair files, in case of crash or necessary migration in case of network down time.
Phase 1
Automation & testing variety of system build setups:
Automating reports by creating scripts wrapping Avalanche APIs.
Building on Vagrant by Hashi group, Virtual-box (Built on Mac OSX hosting Centos & Ubuntu virtual machines through multi tier network setup.
Creating Docker / Kubernetes based virtual machines on OSX through multi tier network setup.
Building on Rapberry Pi 4 for direct access through router.
Research & implementation of FinTech on DeFi
Phase 3
Phase 2
Phase 1
Constructing ETH 2.0 Node validators on AWS Cloud Infrastructure
Creating high availability Amazon Web Services for Cardano Staking Pools and Validation
Phase 1
Phase 2
Phase 3
Phase 4
Migrating ChainLink (data feeding Oracle) to high availability Amazon Web Services
Phase 2
Simulating ChainLink site reliability eco-system
Phase 1
Presentations of new system functionality and usage
Stellar Horizon API Research
Supporting Python development on Stellar Horizon API
Implementing Freqtrade exchange services on AWS
Designing AWS solution for running Freqtrade (Algorithmic trading software based on Python) on EC2 and Docker basis by constructing a secure VPC.
Creating the foundation structure:
Creating sub network to company's main AWS infrastructure and designing security and routing barriers.
Creating EC2 nodes according to the minimum technical specification necessities for ETH2 validator nodes.
Phase 2:
Moving ChainLink local virtual nodes from on (Vagrant / Virtual box) to AWS.
Creating sub network on AWS and configure security and routing
Creating CentOS nodes in the subnetwork
Creating Docker containers Ethereum-client, Postgres RDB & Chainlink locally per node
Running and testing Chainlink reliability eco-system
Developing a monitoring system for stability and upgrades.
Creating high availability Amazon Web Services for Cardano Staking Pools and Validation
Phase-1: Implementing Cardano on Centos-7 using Vagrant for test purposes by compiling the modules and test run the base systems.
Phase-2: Implementing block production & validation configuration on aforementioned local test system.
Phase-3: Implementing the production system on AWS
Phase-4: Production upgrades, monitoring, high availability & quality management
Phase 1:
Constructing ChainLink node on different hard, virtualised, commercial Cloud solutions i.e. AWS & Docker based. Constructing & testing all basic necessities according to guidelines i.e.: Fulfilling requests, Running an Ethereum client.
Performing system maintenance. Connecting to remote database.
Configuration variables.
HTTPS connections.
Best Security and Operating Practices
Python development on Stellar Horizon API
Constructing Stellar core and Horizon API on multiple OSX, Linux platforms & Docker on premise
Implementing on commercial Cloud solutions i.e.: AWS, GCP
Running "Stellar node" and creating validators to participate in Stellar Network
Developing Python API for Stellar for multiple of scenarios
Presentations of new system functionality and usage
Creating the foundation infrastructure
Server configuration,Domain acquisition, SSL certificate,
Creating Mysql database
Configuring SSH,PHP, phpMyadmin services
Word press WooCommerce plugin configurations and setup
Creating the web interface for communications with Kyber decentralised exchange
Creating addresses on Ethereum test net: "Ropsten" for payment transfers and commissions ID.
Testing Multiple ERC-20 payments using source hot and web wallets, i.e.: Metamask, CB for correct invoicing and exchange rates.
Testing the payments of ERC-20 from cold wallets, i.e.: Nano-s, x and Trezor cold wallets having a more detailed analysis of exchange rates, commissions and customer invoicing and reports.
Designing AWS solution for running Freqtrade (Algorithmic trading software based on Python) on EC2 and Docker basis by constructing a secure VPC. Creating the foundation structure:
Constructing test VPC
Creating compute environment, i.e. EC2, EBS, S3, Docker
Creating Mariadb for queries.
Creating AWS ECS & RDS, comparing costs with the later self build system.
Running Freqtrade CLI on the system for queries and update.
Testing remote access for CLI?s through SSH and IAM implementation.
Packing the results in RPM format for Redhat and Centos despatch.
B2B support of multiple architectural variation of e-commerce on heterogeneous Web services for Daimler, Edeka, Footlocker, Aventics, Kodak and Haefler.
My focus was Analysis, Fault repair, architectural and operational documentation, scripting and automation as well as migrating Linux Aapache Mysql Python,PHP model or JAVA applications into modern containers & AWS Cloud technologies
Training SMEs for modern technologies & Cloud
Presenting onsite lecture: AWS IaaS workshop
VPC (Network & content) construction:
Subnets, Route tables, Internet Gateways, DHCP, Elastic IP, Endpoints, NAT Gateways, Network ACL?s,Security groups, VPN, DNS (Route 53)
EC2 Linux Redhat, Centos (Server Compute construction)
Server Instances, Key pairs & SSH access, Images standard and custom, Load balancers, Autoscaling, Accessing CLI and automating through advance shell scripting. Storage S3,EBS (LVM), EFS (NFS 4.0), Security and certificate management using IAM and manually through self created certificates using ssh-keygen, importing into AWS Certificate mnager
Constructing Web sites using Apache, Tomcat with use of selection of RDB i.e. Mysql, Postgres etc.
Demonstrating AWS RDS services for a easier access and administration of popular databases i.e. Mysql, MariaDB, Postgres etc. on top of previously created VPC
Presenting onsite lecture: AWS Container I (Docker, Kubernentes) workshop
Presenting onsite lecture: AWS Container II (Docker, Kubernentes) workshop
Manual creation of docker on previously constructed VPC and EC2 servers demonstrating GIT access locally and on the web and other general GIT functionality for managing codebases.
Manual creation of Kubernetes on top of Docker using previous EC2 servers and making comparisons.
Automatic creation of Docker using AWS Elastic container Service (ECS)
Automatic creation of Kubernetes using AWS Elastic Kubernetes Service (EKS)
Demonstrating AWS code management repository instead of local GIT i.e.: code commit.
Production Support and Development
B2B support of multiple architectural variation of e-commerce on complex heterogeneous Web services for Daimler, Edeka, Footlocker, Aventics, Kodak and Haefler.
Analysis, Fault repair, architectural and operational documentation, scripting and automation.
Technologies
AWS Hybrid migration
Farms of: Redhat Centos, Apache web-server, Tomcat, Oracle, Mysql
Tools: Confluence, JIRA, F1, Zabbix
Languages: Mainly Unix BASH
Third party: Intershop
Designing an coaching internal & off-shore Fujitsu team as well as the Munich-RE US counterpart, for:
(IaaS, PaaS, SasS), AWS Cloud design and build for soft infrastructure (DEV, STG, PROD) & continuous deployment & security implementation.
Product: Wayguard
Implementation of end to end application communication with hybrid local/AWS cloud and government service. This involved multi tier development & test of the application in CI environment and integration in CD that was deployed to production with considering important aspects of security, load-balancing & auto scaling (up and down) and general automation in different segments of development.
Publications: on request
IT FinanzMagazin: on request
More security focused project on AWS, developing a product called Wayguard which was an application that was running on
Android and IOS to locate family and inform Police when they are in trouble and to know their last position at the time of the
incident.
The back-end system was a running on AWS that was connecting to cellular services and Bosch alarm services that was connected to Koeln Police department.
The development team were outsourced using on premise along with their own CI/CD for development of app on IOS and Android.
AWS was consisting of developing on: Single VPC using EC2 servers, with dynamic EBS and some S3 running the back end application that communicated with themobile application.
The application used RDS and some cashing using REDIS (which is called Elasitcache these days) for speed of transaction.
Obviously we used network micro services such as ELB for load balanching Route 56 for DNS services and WAF for firewall as well as Cloudfront on edge location to speed up the data access for mobile.
For production we tested parameters and implemented autoscaling, as well as docker for use in the test environments
Systems Components:
Amazon Services: EC2, EBS, S3, CloudFront, RDS, Elasticache, WAF. Cloudformation
OS-specific: Amazon Linux, Consul, Apache, Nginx, ModSecurity, GIT, LVM.
Scripting: Ruby, AWS-CLI(BASH), JSON, Python
Applications components:
Java, Scala, Websockets/REST (Android/IOS)
Atlasian: Bamboo, JIRA, Confluence Bitbucket.
Leading teams off-shore on-shore and infrastructure team collaboration, Implemented Spacewalk / "Rehat Satellite" system across geographical regions (countries) for software deployment, server fleet management, load leveling, hight availability and security; in a 3 Tier methodology.
Automated registration of ephemeral nodes and the networks of Redhat proxies. Standardized and created sub channels for custom applications and third party packages.
Responsible for research, design and operation of infrastructure for E-commerce software services, (Asia division). Strategic implementation of ephemeral clusters of nodes, hardening security, availability, Geographical distribution and load balancing, Isolating, obfuscating, master back up strategies & replications.
Predominately on open source and Unix based solutions, ie.: Centos/Fedora, (Redhat Linux), Spacewalk, (Redhat Satellite), RPM, YUM, GIT, Ansible, Puppet, Docker, Virtualisation amongst others.
Main languages: C, Shell, Perl, Python.
Atlasian products for build and tracking, ie.: Bamboo, Confluence, Jira
Sonar for software quality assessments of (Java development agile teams)
Evaluating CAD software integration & IT infrastructure, from technical and business feasibility aspects, considering Product Life cycle Management and Product Data Management.
Interviewing software vendors such as Dassault, Siemens, Cenit,etc. and interacting with the automotive designers for the functional design input.
Analysing existing solutions and VW internal possibilities for further developing and modernising.
Adopting architectural designs from different interpretation aspects of existing infrastructure model and data flow.
Devising technical specifications and interacting with VW IT for implementation conformity and timeline feasibility.
Formulating possible scenarios of implementation, in case of technical or other restrictions.
Analysis, design, construction and consulting management in interpretation of technical intricacies; systems and processes for deployment, continuous integration of services/applications; in data centers or Amazon Cloud. These applications were mainly development for construction of maps (data and code) for embedded navigation systems in variety of appliances and integrated software, for companies such as Bing, Yahoo, Daimler, Nokia, etc.
Phase I: Software Configuration management & Release for navigational systems (October 2010 ? January 2013)
Phase II: Design and management of Developement operations on AWS cloud (Consultant) (January 2013 ? December 2013)
Project detail:
Implementing RPM packaging through build severs such as bamboo, Jenkins; into Spacewalk/Red hat satellite repository server for continues integration system into Navteq hosting servers for customers such as Yahoo, Nokia, etc. that involved:
Resolving systems issues due to cross over of the development and different test stages
Resolving crossovers between data builds and code base creations
Creating the concept for end to end continues integrations system for implementing spacewalk (Satellite)
Migration of server due to name convention issues that involved analysis of the systems at OS and the applications levels whilst moving across build server types
Joint venture with Nokia: Supporting the ?L&C? business, for integration of Navteq and Nokia services. This involved working on automation and further development of longer chain of continuous integration due to even more complex environments of multiple tools for a unified objective.
DELIVERY OBJECTIVES, 2012Q4, 2013H1: AMAZON Cloud Hosting of Services
Phase 1 Navteq
Evaluation of the Amazon cloud services such as: EC2, EBS, RDS, VPC, S3 etc. for necessary tools, network bandwidth speed and calculation power of the virtual units.
Creating a prototype deployment of the web services such as NGC6, for testing varied architectural combinations of EC2, S3, EBS, Elastic-ip, route56 etc.
Automating the deployment through shell scripts provisioned through Rubi and json scripts.
Applying multi-threading techniques for tools and scripts for high speed performance using symmetrical multiprocessing power of the Amazon virtual units
Implementation of the final model or models according to deployment scenarios
Advancing the architectural model by incorporating (Cloud Formation, Puppet provisioning)
Possibly incorporating the Spacewalk solution (implemented by my previous services) into the cloud)
Phase 2 Navteq transition to Nokia
Development of automation, software configuration and cloud integration
Feasibility and integration analysis for variety of purposed design architectures
General support of software release and development tools
Phase 3 Nokia division to Here
Providing consultation services to division of Nokia: Here, for portal design on AWS cloud for development operation of Geocoding involving automation and fine-tuning concepts of deployment in to cloud for different stages of testing and some production scenarios. Cost cutting by efficient usage of the AWS cloud resources. Collaboration, hand-over and communication with counter-parts.
Research & implementation of FinTech on DeFi
Tasks:
Phase 3:
DeFi research with Avalanche in focus. Experimenting with CORE wallet functionality and native BTC.b for DeFi strategies on Avalanche C-chain.
Phase 2:
Pivoting focus on Avalanche Web3 DeFi applications built on C-chain with exponentially low costs and lightening high speed of finality whilst keeping decentralised having over 1300 validators.
Phase 1:
Analysis of "Decentralised Finance" Applications on trust-less Ethereum ecosystem that rely on cryptographic economy based on mathematical consensus finalities.
All these systems are protocol based and accessible through many types of APIs and Cloud or local implementation. The main focus of these sub-systems simulates the current banking systems for Lending, Borrowing against collaterals, offering commodities and equities in single or indexed form of products i.e.: ETFs. as well as derivatives and synthetics on multiple forms of Decentralised Exchanges and aggregators.
Some other special services such as liquidity pools and dark pools are also in development and these services are not only decentralised and managed by complex mathematical consensuses but also available to public, due to its trust-less nature. Data feeds through decentralised Oracles that can not be manipulated.
more Projects on request
1991 ? 1993
Newcastle University Australia (Callaghan)
Computer science (Operating systems and Algorithmic problem solving)
1990
Terrigal High School Australia, HSC
3U Maths, 2U Computing, 2U English, 2U Physics, 1U Physical education
Certifications
2002-01
P-Series AIX System Administration Test 19 / IBM
1996-08
Advanced UNIX System Programming / Digital
VMS for advanced users and programmers License 9N-000001-02.0003 / Digital
1995-04
Open VMS for users Digital, License 9N-000001-02.0001 / Digital
CA-INGRES User Interfaces, License QFM.502/250994 / Computer Associates
1995-05
CA Ingres/SQL License QFM.502/250994 / Computer Associates
1994-06
Advanced C programming for Unix / School of Technology and Information Studies TVU London
1992-10
Retail Sales Management Training Programme Stage Two / Tandy Electronics
1992-05
Retail Sales Management Training Programme Stage One / Tandy Electronics
Current Focus
Technical interpretation, Delivery synchronisation & Support, Information acquisition, Market adaptation Devising fundamental systems and infrastructure evaluations for companies of all sizes for modernising their IT structure. Creating design documentation for transformation of current status to modernised model as an incentive for improvement.
Implementing cloud solutions using existing micro-services such as AWS for "platform as service" or building the services from
scratch on private, public or hybrid cloud or simply local data centre.
Discussing specific concerns about the modernisation and automation with internal technical teams. Automate configuration
management using modern tools for Infrastructure as Software, Platform as Services, Software as Service, Continuous Integration
& Deployment using Kanban, from ITIL to Agile Software development methodologies.
Platform, software integration to Unix/Linux derivatives, open source or proprietary systems. Applying fundamental security
practices at OS, Network, Application and transition to cloud providers security tools in specific AWS and GCP amongst other
minor providers
Main Technologies
Chronological focus according to market evolution
Introduction:
With over 20 years experience in dynamic and mission critical environments, have been involved in many IT projects from many technical and tactical perspectives. In recent years having more tendency of involvement in architectural, managerial and team leadership, including off-shore resource management and team creation of specialists; in complex heterogeneous environments, whilst not losing sight on intricate technical aspects i.e.:
Oversee the development of future component architectures and migration plans.
Conceive, design, prototype, and test new methods, algorithms, and models.
Define and enforce appropriate technical standards and procedures.
Lead the research and development of new software products and applications.
Define system, technical and application architectures for major areas of development.
Periodic adaptation to market evolution:
2020:
Particular focus on Fin-tech and DeFi on Blockchains and Cryptocurrencies.
2019:
Disambiguating "Big data" & Artificial intelligence terms, for customers in need of statistical and data analysis
2018:
Focus on: IaaS, PaaS, SaaS, IaC, CM, CI/CD Cloud computing, particularly on: AWS
2017:
Blockchain Decentralised platform development DLT & DApp
2015:
Open (Cloud & Platform & Configuration management) development, i.e. Openstack, Docker, Ansible.
2014:
PDM and PLM in automotive industry for data accumulation and distribution amongst joint venture companies.
2013:
Cloud computing, specifically Amazon AWS, used for Dev-Ops for development of Geo-coding navigational systems.
2010:
Continuous integration & deployment i.e. SVN, GIT, Spacewalk, Jenkins, Puppet, alongside Atlasian Products
Engineering foundation
During my career development, my main focus has become fundamentally UNIX / Linux systems. (SystemV, Berkeley, POSIX, Ansi C ), TCP/IP (network protocol/internet protocol), Shell programming (Bash, Korn, C Shell (awk, sed, grep,regex)) and PERL (scripting/programming) for system, network and data manipulation.
The specific UNIX types that I further specialise are:
Linux distro, specifically Linux Red Hat. I got involved with linux since kernel one at it's scientific stages and through out its evolving into a commercially reliable product mainly on 8086 chip-set family and Powerpc.
IBM RS-6000?s and P-Series (SMIT, NIM,WLM, LPAR, Virtualisation, VIO Micropartitioning, WPAR) AIX
For automation, system performance analysis and security implementation with applied scripting knowledge in SHELL and PERL.
I continue to use Python and Shell as my main strength for scripting and programming languages for operating system, database (alongside with SQL) and network development; amendment and automating amongst other helper scripting languages such as Java script, Golang & Ruby.
Having C as my main strength for system programming, I fall back on this skill for deeper system trouble shooting when necessary.
I have also been involved with many Relational Databases in general such as Oracle, Ingres, Postgres, Mysql from administration installation to data analysis or database programming using SQL, PERL, SHELL and for more extensive programming, using PERL/DBI and embedded C.
History
In 1990, After graduating my high school diploma with concentration on math, physics and computer studies from Terrigal high school in Australia, I?ve entered University of Newcastle Callaghan, Australia studying General computer science with interest in Operating systems and algorithmic problem solving.
In 1993 I received a sponsorship from Tandy electronics to expand my knowledge in business side of technology by getting certified in electronics retail sales management stages one and two, whilst working; instead of pursuing my degree.
In 1995 having my experience in 8086 chip-set and msdos from Tandy, alongside the business exposure and other electronic products in Australia and then UK, I started a job working for Apple UK concentrating on 68000 chip-set and the new PowerPc computers, Apple-OS, application, network support for household and small business customers. Meanwhile I was certified as Apple OS specialist and C programming language, Alongside my growing knowledge in Linux operating system.
During 1995-96, I started working for Bytel, a Telecommunication software house as Support engineer. I proceeded to be a senior consultant and on site trainer for clients, for the main product; subscriber management system. During this work I was sponsored to be certified in Relational database administration and SQL programming and Embedded SQL programming in CA-Ingres and Operating system user, administration and programming in VAX/VMS and DCL. Later it was decided to migrate the systems for more modern architecture. The Operating system of choice had become to be Unix and the RDB of choice Oracle.
I?ve attended administration courses for Unix and Oracle for the major migration. By October 96 I?ve transferred my knowledge of DCL scripting and VMS operating system support to Shell scripting and UNIX administration alongside with SQL interactive update and Embedded SQL programming encapsulated by shell scripts instead of DCL scripts.
Since then I?ve been concentrating on UNIX as my main specialty for network, database, storage and other aspects of IT. Published: 2000
Progression
As an IT consultant I have been involved in projects from both technical and tactical perspectives. To elaborate, tactical consultation can be described as helping the project with time management, budget, resources and team leadership. Technical consultation can be described as the services that fulfil the project's practical needs, integration and compatibility, and that help with communication amongst the multiple business parties involved in the project.
I mainly specialise in Unix systems and their main universal network protocol, TCP/IP, which has evidently taken over the IT industry and internet communication in many respects, with far more success than its rivals. UNIX was once written in assembly language but was gradually ported to and reconstructed in the C language, which is amongst the fundamentals of my skills. I have also mastered shell scripting for the system administration needs of the Unix operating system. Gradually I advanced to Perl programming for its greater versatility across operating systems and databases, its syntax conformity with shell, C and C++ code, its far more flexible array handling, and its powerful facilities for building and amending network and internet services.
Unix has been produced by different vendors such as IBM (AIX), HP (HP-UX) and Sun (Solaris), and was later re-implemented, originally as a scientific project, in Linux, which was commercialised thanks to its free source code philosophy and is these days also produced by many different vendors, Red Hat and SUSE being among the top commercially supported, alongside other derivations such as Ubuntu, Debian, Slackware, QNX etc.
I prefer to categorise UNIX by the standards it has matured in, System V, Berkeley, ANSI C and POSIX, since these standards are becoming more and more integrated, which allows one to correlate and track the different derivations of UNIX independently of each vendor's proprietary standards. Apple has also taken part by introducing OS X, which is fundamentally built on Berkeley. There is no doubt that with modern technologies such as volume managers, file systems and particularly virtualisation there are tools specific to each operating system brand, but the concept of utilisation remains fairly uniform.
I started my IT career in 1994 with one of the well-known operating systems of the past, VMS, and the Ingres relational database. After a year I moved on to Unix as the operating system and to relational databases such as Oracle, Ingres, MySQL and Postgres. During my IT career I have also had plenty of experience with other platforms, such as Microsoft products in heterogeneous systems.
Nevertheless, I keep my concentration on Unix systems and on integrating these skills into web business, since pure structural coding is a skill of the past and it is now more important to be able to work with existing code and port it into modern frameworks and modular designs.
Experience
Diconium
I was a consultant at Diconium in Stuttgart, Germany, providing B2B support for multiple architectural variations of e-commerce on complex heterogeneous web services for Daimler, Edeka, Footlocker, Aventics, Kodak and Haefler.
My focus was analysis, fault repair, architectural and operational documentation, scripting and automation, as well as migrating LAMP-model or proprietary Java applications into modern container and cloud technologies.
I confronted challenges with Docker port-mapping to outside public IPs, as well as IPC problems due to CPU architecture, core count or lack of CPU power on local and on-premise machines. I also implemented Docker and Docker Swarm, and later evaluated minikube, but in the end most customers preferred the managed Docker and Kubernetes offerings on AWS (ECS and EKS) for ease of use.
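For illustration, a minimal sketch of the kind of explicit host-IP port mapping this involved; the IP address, image name and ports are placeholders rather than actual customer values.

# Bind the container's TLS port to one specific public interface instead of
# 0.0.0.0, so the service is only reachable on that address.
docker run -d --name shopfront \
  -p 203.0.113.10:443:8443 \
  registry.example.com/shop/frontend:latest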
I also started a proof of concept with Terraform as a cloud-agnostic provisioning solution.
I ran workshops for the SMEs on AWS and Google Cloud as well as Docker and Kubernetes. Meanwhile I created YouTube videos for the technicians, for later review and practical tests.
The main focus of the training was deployment into the cloud while supporting ongoing hybrid setups, especially for Daimler-Benz as the largest customer.
Munich-RE
At Munich Re, an insurance company in Munich, Germany, I designed an automated migration to modern infrastructure and configuration management to replace manual server migration.
There were hundreds of servers in Germany and the US to be migrated, so I isolated the migration into smaller segments and grew the batch size after every successful attempt.
We had to modernise the configuration management system away from a legacy HPSA installation.
The target systems were predominantly CentOS Linux, so I chose Red Hat Satellite 6.0 combined with CHEF configuration management, in order to also support some Microsoft systems.
At the same time there were three versions of CentOS in use (5, 6 & 7), which made it slightly more complex.
Some of the configuration on the legacy 5 and 6 systems was not available on Satellite 6.2, so I created a tool to extract that information separately and apply it to a git repository, as sketched below.
This git repo, along with automated CHEF runs, was used to recreate the older systems, update the configuration and apply it to groups of servers.
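A minimal sketch of how such an extraction tool can look, assuming SSH access to the legacy hosts; the host argument, repository path and selected files are illustrative assumptions.

#!/usr/bin/env bash
# Capture package and configuration state from a legacy CentOS 5/6 host and
# commit it to a per-host directory in the configuration git repository.
set -euo pipefail

REPO=/srv/legacy-config   # assumed local clone of the config repo
HOST="$1"                 # hostname passed on the command line

mkdir -p "$REPO/$HOST"
ssh "$HOST" 'rpm -qa | sort' > "$REPO/$HOST/packages.txt"
ssh "$HOST" 'tar czf - /etc/sysconfig /etc/yum.repos.d' > "$REPO/$HOST/etc-snapshot.tar.gz"

cd "$REPO"
git add "$HOST"
git commit -m "Configuration snapshot for $HOST ($(date +%F))"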
On the other hand there were special cases such as HPSA policies; for these I created separate CHEF cookbooks and recipes and compiled RPMs to apply the special configuration policies.
The next level was to replicate and provision the systems on AWS or Azure. I recommended AWS, as the majority of the systems were Linux based and we had many problems deploying them on Microsoft Azure.
I created proofs of concept for dockerisation, implemented minikube and Kubernetes for applications, and introduced Terraform for more cloud-agnostic provisioning.
AXA
We were developing a product called Wayguard, an application running on Android and iOS to locate family members, inform the police when they are in trouble, and record their last position at the time of an incident.
The back end was a sophisticated system running on AWS, connecting to cellular services and to the Bosch alarm service, which in turn was connected to the Cologne police department.
Apart from the CI and CD pipelines, development on the live AWS system involved EC2, EBS, S3, CloudFront, RDS, ElastiCache, ELB, WAF, VPC and Route 53, with autoscaling implemented at the end, as well as Docker for the test environments.
Problem one
I cut down SSL certificate costs by introducing wildcards in the Subject Alternative Name for multiple subdomains, but this caused whole sets of connections to drop somewhere inside AWS.
After chasing it up with AWS support without being able to rectify the problem, I found a workaround by changing the listener definition on the Elastic Load Balancer from the service name ("TCP or UDP") to the port number, which ended up working.
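A hedged sketch of the kind of listener definition involved, assuming a Classic ELB managed via the AWS CLI; the load balancer name and ports are placeholders.

# Define the listener purely by protocol and port numbers (TCP pass-through),
# so TLS with the wildcard SAN certificate is terminated on the backend.
aws elb create-load-balancer-listeners \
  --load-balancer-name wayguard-elb \
  --listeners "Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=8443"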
Problem two
We were using WAF in front of CloudFront and some of our product's communications were not getting through.
I found out that the problem was due to WAF not supporting the WebSocket protocol, which we used for communication in some parts of the product.
Using Nginx as a reverse proxy, I separated the normal traffic from the WebSocket traffic and implemented a protocol-level firewall on the WebSocket channel, compiling a custom ModSecurity module into Nginx.
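A minimal sketch of that separation, written as a shell snippet that drops an Nginx site config in place; the upstream addresses, paths, certificate locations and server name are assumptions, and the ModSecurity directives are omitted here.

# Write an Nginx site config that routes WebSocket traffic to its own upstream,
# so protocol-level filtering can be applied on that path only, while normal
# requests go to the application backend.
cat > /etc/nginx/conf.d/wayguard.conf <<'EOF'
upstream app_backend { server 10.0.1.10:8080; }
upstream ws_backend  { server 10.0.1.20:8081; }

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/example.key;

    # WebSocket channel: upgrade the connection and proxy it separately.
    location /ws/ {
        proxy_pass http://ws_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Everything else goes to the normal application backend.
    location / {
        proxy_pass http://app_backend;
    }
}
EOF
nginx -t && nginx -s reload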
Riplife
In Malaga, Spain, I started a project with a gaming company called Riplife, indirectly owned by Calvin Ayre.
Since the Chinese government applies strong censorship, I had to find a way to stop them from tracing the servers or shutting down the gaming services.
Using a star topology of Spacewalk proxies, I distributed the servers so they could not be traced back, and divided them amongst multiple ISPs connecting from Taiwan.
When a server was attacked, all the servers connected to that proxy would shut down, and there was no trace back to the other branches due to strong protocol obfuscation.
Then another proxy was activated and the software was reapplied through Spacewalk channels.
VolksWagen
Persuading Audi, Skoda & Porsche to adopt unified CAD software and data pipelines for communication within a very complex IT infrastructure.
Nokia
Before an autoscaler was available, I created an automated solution for scaling servers, using LVM to assign EBS storage on demand and remove it when no longer needed, to save costs.
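A minimal sketch of the storage side of that solution (only the grow path is shown), assuming an EC2 instance with an LVM volume group called datavg mounted on /data; the sizes, device name and threshold are placeholder assumptions.

#!/usr/bin/env bash
# When the data filesystem runs low, attach an extra EBS volume, fold it into
# the existing LVM volume group and grow the logical volume online.
set -euo pipefail

INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
USAGE=$(df --output=pcent /data | tail -1 | tr -dc '0-9')

if [ "$USAGE" -gt 80 ]; then
    VOL_ID=$(aws ec2 create-volume --size 100 --availability-zone "$AZ" \
        --volume-type gp2 --query VolumeId --output text)
    aws ec2 wait volume-available --volume-ids "$VOL_ID"
    aws ec2 attach-volume --volume-id "$VOL_ID" \
        --instance-id "$INSTANCE_ID" --device /dev/xvdf
    sleep 10                               # allow the device to appear
    pvcreate /dev/xvdf
    vgextend datavg /dev/xvdf
    lvextend -l +100%FREE /dev/datavg/datalv
    resize2fs /dev/datavg/datalv           # grow the ext4 filesystem online
fi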
Commendations & references available on request
Projects location scope: Australia, UK, Germany
IT OPS: Analysis & Development, Automation, Maintenance & Support, Migration, DR & security
Software Development: Macro Analysis, Statistical/data analysis, Continuous integration, Coding and tools
Main industries: Telecommunication, Automotive, Travel, Finance
Industry pace: Commercial, Maintenance & support, R&D
Sales: Technical interpretation, Delivery synchronisation & Support, Information acquisition, Market adaptation