VerifiMe Security Policy
3 January 2024
Trust and security are foundational elements of VerifiMe.
Overview
VerifiMe is a digital platform that facilitates identification transactions.
VerifiMe delivers a variety of services through mobile applications, websites and associated products. VerifiMe contracts with businesses that need to identify customers, employees or other entities to fulfil their own due diligence and security needs, or to comply with legislation, in particular AUSTRAC requirements relating to AML/CTF compliance.
Our objective is to reduce the need to share or store underlying identification documents to facilitate verification transactions, and to give customers transparency about who has their information and how it is being used.
Security and Privacy practices are foundational to VerifiMe. This white paper is meant to help people and entities engaging with the VerifiMe platform to understand our relevant data flows and security posture. It also outlines the security posture that we are contractually committed to maintaining.
Data Types & Ownership
Here we outline the types of data typically involved in our services, who owns that data and the associated responsibilities. See our privacy policy on our website for details.
Customer Data
VerifiMe typically obtains identification data from customers that need to be identified by a business. We typically then enrich this data by checking it against Government and public authority records, via other publicly available data and by having an identity verification service confirm the validity of the data.
VerifiMe has three use-cases for customer data:
1. we create a “zero-knowledge” proof (also known as a “token”) that can be shared (with customer permission) with businesses that need to identify a customer; and
2. we pass the tokens representing the zero-knowledge proof of the data through a configurable rules engine, which assesses whether the data points meet a business’ identification requirements; and
3. in some cases, we store images of the underlying record so that with enough lead time they can be accessed by AUSTRAC for any auditing requirements.
Organisation Data
VerifiMe monetises its services by using the data it collects to compare that data against a business’ AML/CTF program or other relevant program adopted to meet an organisation’s compliance obligations.
In order to do so, VerifiMe collects details about an organisation’s compliance program and uses that to configure a rule set to compare customer data against.
VerifiMe may also obtain transaction information (or other customer data) from businesses relevant to a customer, where that information is relevant to the compliance landscape of a particular business.
Our Security Implementation and Processes
All security implementation and processes described here apply to any data that we collect.
Hosting
How we set up our cloud infrastructure in line with best practices with the world's most trusted hosting provider: Amazon Web Services.
VerifiMe uses AWS for its hosting application workload needs, including:
• Elastic Compute Cloud (EC2)
• Elastic Container Service (ECS)
• Simple Storage Service (S3)
• Relational Database Service (RDS)
• S3 Glacier
AWS operates on a shared responsibility model. AWS is responsible for the security of global infrastructure and foundational services. Customers are accountable for securing their data, managing access, and ensuring the security of applications and configurations within the AWS environment. This collaborative approach ensures a secure cloud computing environment. See https://aws.amazon.com/compliance/shared-responsibility-model/ for further details.
Vercel
VerifiMe uses Vercel for hosting front-end applications.
Host Standards
All hosts running the application workload are managed by AWS. AWS is responsible for patching and maintenance of the hosts.
Host security
How the server is set up to:
• Prevent attacks.
• Minimize the impact of a successful attack on the overall system.
• Respond to attacks when they occur.
All hosts are completely managed by AWS, with no direct access by personnel outside of AWS.
Database (RDS) access is only allowed via AWS Systems Manager Session Manager for approved administrators.
Host encryption - Encryption of data at rest on the hosts running the workload.
All hosts are securely managed and encrypted by AWS.
Network encryption - Encryption of data in transit across the network.
All network services and communication are encrypted using TLS 1.3.
Patch Management - The process of distributing and applying updates to software, often needed to fix security vulnerabilities or bugs.
AWS manages security patching of the hosts.
Physical Security - The physical infrastructure that protects where the data is stored.
AWS provides physical security including 24x7 onsite security, CCTV, multi-factor access control and various other security protocols which exceed most industry standards. Further information can be found on Amazon’s website: https://aws.amazon.com/compliance/data-center/controls/
Data Locality - Where the data is stored
All personally identifiable information is stored locally in the region of the customer and participants.
Network Security
Any activity designed to protect the usability and integrity of the network and data. Effective network security manages access to the network. We use security groups to manage traffic between groups of resources.
All databases are in secure subnets which only have incoming network access from another private subnet that hosts our server instances. All public internet traffic is routed through a public subnet which hosts the load balancers and directs traffic to our ECS instances.
Configuration Monitoring - Process that tracks and monitors changes to a software system to establish consistency of a product
Configuration monitoring is systematically upheld through the integration of advanced logging and monitoring tools. Sumo Logic is employed to analyse and manage application and access logs, offering a comprehensive view of system behaviour. New Relic contributes to configuration monitoring by providing real-time insights into application performance, enabling proactive identification of any deviations. AWS CloudTrail plays a pivotal role in tracking and recording AWS infrastructure changes. Together, these tools form a robust configuration monitoring ecosystem: Sumo Logic and New Relic contribute to application-specific monitoring, while AWS CloudTrail ensures visibility into any changes at the infrastructure level. This combination ensures that our system's configurations remain transparent, deviations are promptly identified, and the overall health and integrity of our environment are maintained.
Firewall - Filters, monitors, and blocks HTTP traffic to and from web services
All traffic passes through the AWS Application Load Balancer to reach the internal ECS cluster. Non-HTTP traffic is intentionally dropped as the ALB is configured to filter out non-application HTTP traffic. Moreover, stringent security measures are in place, where all API access mandates the presence of a valid 'Bearer' authentication token; without which, the requests are automatically rejected. These measures collectively ensure a controlled and secure flow of traffic to the internal ECS cluster, reinforcing the integrity of our system architecture.
Email Security
DMARC, SPF and DKIM are enabled on all supported mail servers and domains to protect against phishing and email spoofing from VerifiMe domains.
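As an illustration of how these three controls are typically published, the DNS TXT records below show the general pattern. The domain, DKIM selector, key and reporting address are all placeholders, not VerifiMe's actual records.

```
; Hypothetical DNS TXT records (domain, selector and addresses are placeholders)
verifime.example.                      IN TXT "v=spf1 include:_spf.example.com -all"
selector1._domainkey.verifime.example. IN TXT "v=DKIM1; k=rsa; p=<base64-public-key>"
_dmarc.verifime.example.               IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@verifime.example"
```

SPF lists the servers allowed to send for the domain, DKIM publishes the public key used to verify message signatures, and DMARC tells receivers to reject mail that fails both checks and where to send aggregate reports.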
Key Management
All keys used for encryption are managed in AWS KMS to allow for easy key rotation, access control management and audit purposes.
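The pattern KMS supports here is envelope encryption: each object is encrypted under a fresh data key, and only the data key is wrapped by the master key held in KMS, so rotating the master key means re-wrapping small keys rather than re-encrypting all data. The sketch below illustrates the pattern with a toy stdlib cipher; it is not VerifiMe's implementation and the XOR keystream is for illustration only, never for production use.

```python
import hashlib
import secrets


def _keystream(key: bytes, n: int) -> bytes:
    """SHA-256 in counter mode as an illustrative keystream (NOT production crypto)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]


def _xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))


def envelope_encrypt(master_key: bytes, plaintext: bytes):
    # 1. Generate a fresh data key per object (analogue of KMS GenerateDataKey).
    data_key = secrets.token_bytes(32)
    # 2. Encrypt the payload under the data key.
    ciphertext = _xor(plaintext, data_key)
    # 3. Wrap the data key under the master key; KMS performs this server-side,
    #    so the master key never leaves the key management service.
    wrapped_key = _xor(data_key, master_key)
    return wrapped_key, ciphertext


def envelope_decrypt(master_key: bytes, wrapped_key: bytes, ciphertext: bytes) -> bytes:
    data_key = _xor(wrapped_key, master_key)  # unwrap (KMS Decrypt analogue)
    return _xor(ciphertext, data_key)
```

Because only the wrapped data keys depend on the master key, key rotation and access auditing can be centralised in KMS without touching the stored objects themselves.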
Human Resource Security
How we ensure that all employees (including contractors and any user of sensitive data) are qualified to do so.
Security & Privacy awareness training - Training employees on cyber security and privacy
All employees conduct security and privacy training as part of their onboarding process. There is regular retraining and updating of processes to ensure all employees are well versed in security and privacy policies.
Security & Privacy Team
There is a dedicated privacy officer and security team to manage all related issues and reports.
Background Checks
All employees are subject to a 3rd party background check during onboarding.
Access Control
A security technique that regulates who or what can view or use resources in a computing environment to minimize the risk to the business.
Employee Accounts - Used to grant access to VerifiMe team members.
AWS access for developers is via federated sign-on to a dedicated identity account, which allows developers to assume appropriate roles for the tasks and environment they require. All credentials are short-lived, and permissions are granted through pre-set roles.
User Accounts - An identity created for a person in order to use the software. Participants can authenticate with our services using an AWS-managed authentication backend.
Multi-Factor Authentication
An electronic authentication method in which a user is granted access to a website or application only after successfully presenting two or more pieces of evidence. All internal tools for employees enforce MFA, and where 3rd party providers offer MFA, we enforce it.
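The one-time codes used by most MFA apps follow the TOTP standard (RFC 6238): an HMAC over the current 30-second time step, dynamically truncated to six digits. The sketch below is a minimal stdlib implementation of that standard for illustration; the actual factors enforced by VerifiMe's providers may differ.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, period: int = 30, digits: int = 6, at=None) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time step, dynamically truncated."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because both the server and the authenticator app derive the code from a shared secret and the clock, a stolen password alone is not enough to authenticate.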
Account Termination - All VerifiMe user accounts are disabled or deleted immediately on their termination date.
Least Privilege
The concept and practice of restricting access rights for users, accounts, and computing processes to only those resources absolutely required
Access to VerifiMe data and internal systems is configured with role-based access control using the Least Privilege approach.
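A least-privilege RBAC check reduces to a default-deny lookup: each role carries an explicit allow-list of permissions, and anything not listed is refused. The sketch below illustrates the shape of such a check; the role and permission names are hypothetical, not VerifiMe's actual roles.

```python
# Hypothetical role-to-permission mapping (illustrative names only).
ROLE_PERMISSIONS = {
    "support_agent": {"customer:read"},
    "compliance_officer": {"customer:read", "audit:read"},
    "administrator": {"customer:read", "customer:write", "audit:read", "rules:configure"},
}


def is_allowed(role: str, permission: str) -> bool:
    # Default deny: unknown roles and unlisted permissions are both rejected.
    return permission in ROLE_PERMISSIONS.get(role, set())
```

The important property is that granting a new capability requires an explicit change to the mapping, which can then be peer-reviewed and audited.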
Secure Development Lifecycle
How we ensure we are building VerifiMe according to standard security best practices.
Software Architecture
The organization of a system including all components, how they interact with each other, the environment in which they operate, and the principles used to design the software.
The architecture is a classical 3-tier system. The web-based frontend communicates securely with the RESTful API backend using HTTPS to encrypt data in transit, and implements a Content Security Policy to mitigate cross-site scripting (XSS) attacks.
The API backend enforces authentication and authorization, requiring API calls to be authenticated with industry-standard protocols such as OAuth 2.0 and JWT. The system also implements role-based access control to ensure users have appropriate permissions.
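To show what the JWT check involves, the stdlib sketch below implements HS256 signing and constant-time verification. This is an illustration of the mechanism only: a production backend would use a vetted library and additionally validate claims such as `exp`, `iss` and `aud`.

```python
import base64
import hashlib
import hmac
import json


def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_hs256(payload: dict, key: bytes) -> str:
    """Build a JWT: base64url(header).base64url(payload).base64url(HMAC-SHA256)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    sig = _b64url(hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"


def verify_hs256(token: str, key: bytes):
    """Return the payload if the signature is valid, otherwise None."""
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return None
    expected = hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest()
    given = base64.urlsafe_b64decode(sig + "=" * (-len(sig) % 4))
    if not hmac.compare_digest(given, expected):  # constant-time comparison
        return None
    return json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))
```

Any request whose token fails this check is rejected before it reaches business logic, which is what makes the "valid Bearer token required" rule enforceable at the API boundary.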
API inputs are validated and sanitized to prevent injection attacks such as SQL injections.
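The standard defence against SQL injection is parameterized queries, where user input is bound as data rather than spliced into SQL text. The sketch below uses the stdlib sqlite3 module purely as a stand-in for the production PostgreSQL database to illustrate the technique.

```python
import sqlite3

# Illustrative only: sqlite3 stands in for the production PostgreSQL (RDS) database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO customers (email) VALUES (?)", ("alice@example.com",))


def find_customer(email: str):
    # The ? placeholder binds the input as a value, never as SQL text, so an
    # injection payload such as "' OR '1'='1" simply matches no row.
    return conn.execute(
        "SELECT id, email FROM customers WHERE email = ?", (email,)
    ).fetchone()
```

The same binding pattern applies with PostgreSQL drivers; only the placeholder syntax differs.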
The API persists data in an AWS-managed database (PostgreSQL RDS). Data at rest is encrypted using the database's encryption features. Access controls are strictly enforced, allowing only authorized users and applications, and user privileges are regularly reviewed and updated to follow the principle of least privilege.
The API also persists user-supplied data in cloud-based object storage (AWS S3). The data is encrypted, and fine-grained access control is strictly enforced. Secure upload and download are achieved with pre-signed URLs that control access to stored objects.
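A pre-signed URL is essentially an object key plus an expiry time, bound together by an HMAC that only the server can produce. The stdlib sketch below illustrates that pattern; it is a simplification, not AWS Signature Version 4, and the signing key and URL layout are hypothetical.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"server-side-signing-key"  # hypothetical; S3 derives this from AWS credentials


def presign(object_key: str, expires_in: int = 300, now=None) -> str:
    """Issue a time-limited, tamper-evident download URL for one object."""
    expires = int(time.time() if now is None else now) + expires_in
    sig = hmac.new(SECRET, f"{object_key}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"/objects/{object_key}?" + urlencode({"expires": expires, "signature": sig})


def verify(object_key: str, expires: int, signature: str, now=None) -> bool:
    """Accept only unexpired URLs whose signature matches key and expiry."""
    if (time.time() if now is None else now) > expires:
        return False  # link has expired
    expected = hmac.new(SECRET, f"{object_key}:{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)
```

Because the expiry is covered by the HMAC, a client cannot extend a link's lifetime or point it at a different object without invalidating the signature.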
Security groups are configured to restrict inbound and outbound traffic to the application hosting environment, with regular security audits and vulnerability assessments.
Dependencies are regularly scanned and updated to ensure they do not contain known vulnerabilities.
The design principles are as follows.
1. Security by design: threat modelling is incorporated during design to identify potential security risks and mitigation strategies. We adhere to secure coding practices to minimize common vulnerabilities, and a code review process is strictly enforced for all changes.
2. Data privacy: strict data access policies are part of the system design to protect sensitive information and ensure compliance with data protection regulations.
3. Incident response plan: an incident response plan is established for timely detection, containment and recovery from security incidents.
4. Continuous monitoring: continuous monitoring tools detect and respond to security events in real time. Logging and auditing mechanisms are implemented to track and analyse system activity for potential security incidents.
Team - The team who builds and manages our software stack
Our engineering team comprises permanent full- and part-time employees. On occasion, contractors are used to perform limited project-based work under the close supervision of our permanent team.
Build and Deploy
Build means to compile the project. Deploy means to publish the project. In a continuous deployment process, code changes trigger an automated pipeline that streamlines the entire deployment lifecycle. Upon a commit to the repository, a build process is initiated, generating Docker container images. These images are then pushed to Amazon Elastic Container Registry (ECR), a fully managed container registry. Subsequently, the deployment process is orchestrated through infrastructure as code (IaC), ensuring consistency and reproducibility. The ECS (Elastic Container Service) cluster on Amazon Web Services is dynamically updated with the new container images, enabling a seamless and controlled rolling deployment. This continuous deployment approach optimizes development workflows, enhances deployment reliability, and fosters rapid and efficient delivery of software updates to the production environment.
A rolling deployment strategy is employed within the ECS (Elastic Container Service) cluster to guarantee continuous availability of the application. This strategy involves gradually updating instances of the application while maintaining a specified level of service, preventing downtime.
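The availability property of a rolling deployment can be stated simply: each replacement task is started before an old task is stopped, so the number of running tasks never drops below a minimum. The toy model below demonstrates that invariant; the function and its arguments are illustrative, not the actual ECS API.

```python
def rolling_update(old_tasks, new_version, min_healthy):
    """Toy model of a rolling deployment over a list of task versions.

    Each replacement is launched before an old task is drained and stopped,
    so the running count never falls below min_healthy (make-before-break).
    """
    running = list(old_tasks)
    low_watermark = len(running)
    while any(v != new_version for v in running):
        running.append(new_version)  # launch one replacement task
        # the old task is drained and stopped only once the new one is healthy
        running.remove(next(v for v in running if v != new_version))
        low_watermark = min(low_watermark, len(running))
        assert len(running) >= min_healthy  # availability invariant
    return running, low_watermark
```

In ECS the same effect is achieved with the service's minimum healthy percent and maximum percent settings, which bound how many tasks may be stopped or started at once.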
By adopting this rolling deployment strategy, users and clients experience uninterrupted service, and the application remains available throughout the update process. This approach minimizes the impact on end-users and guarantees a seamless deployment experience while maximizing the system's reliability and availability.
Infrastructure Configuration - How we set up and manage our Hosting Infrastructure
Infrastructure changes are controlled via CI/CD with AWS CDK to ensure no misconfiguration and peer review of infrastructure changes. All configurations are tested through various environments before promotion to production accounts.
Testing - A method to check whether the actual software product matches expected requirements and to ensure that the software product is defect-free.
Our build process includes automated unit testing and static analysis as well as automated UI testing.
Environments
The collection of hardware and software tools a developer uses to build software systems. We have multiple development environments which are deployed on separate infrastructures and all production data is isolated from all development workloads.
3rd Party Penetration testing
Independent auditing and testing of systems, websites, and applications for vulnerabilities.
We engage independent 3rd party security auditing and testing services from reputable vendors annually.
3rd Party Cloud security testing
We engage an independent 3rd party who conducts routine cloud security testing using the AWS Well-Architected Framework and CIS Benchmarks. By aligning with the AWS Well-Architected Framework, they ensure their cloud architecture is optimized for security, reliability, and performance. Additionally, adherence to CIS Benchmarks provides a standardized and industry-recognized set of best practices for securing their cloud resources. This proactive approach enables the organization to identify and address security vulnerabilities, ensuring a robust and resilient cloud environment that aligns with industry standards and best practices.
Monitoring
Overseeing the entire development process across planning, development, integration and testing, deployment, and operations.
Vulnerability Scanning - A process to assess computers, networks or applications for known weaknesses.
All of our images undergo thorough scanning for known vulnerabilities utilizing AWS ECR, and deployment is automatically halted in the presence of any identified issues. Additionally, we leverage GitHub's dependency scanning service to detect and promptly address vulnerabilities within our software dependencies, ensuring a secure and resilient development environment.
Continuous Monitoring - A process to detect application performance issues, identify their cause and implement a solution before the issue leads to compliance issues and security threats.
The software system implements a robust continuous monitoring strategy through a synergistic blend of leading tools. CloudTrail is employed for comprehensive AWS resource monitoring, tracking user activity and API usage. New Relic APM (Application Performance Monitoring) provides real-time insights into application performance, ensuring optimal functionality. Frontend user experience monitoring tools enable the tracking of user interactions, identifying and addressing issues from the end-user perspective. Sumo Logic is utilized for in-depth analysis of application and access logs, offering valuable insights into system behaviour and potential security incidents. This combination of CloudTrail, New Relic APM, frontend user experience monitors, and Sumo Logic forms a comprehensive monitoring ecosystem, ensuring proactive identification and resolution of performance issues and security concerns throughout the software system.
Intrusion Detection
Monitors traffic moving on networks and through systems to search for suspicious activity and known threats, sending up alerts when it finds such items. We have alarms configured to notify the security team of any suspicious activity. These are based on the benchmark defined by the Center for Internet Security (CIS) Amazon Web Services Foundations and leverage CloudWatch Alarms and GuardDuty monitoring.
Availability & Continuity
How we ensure we can handle an incident and get the system back up and running as fast as possible.
Maintenance Window
A scheduled outage of services over a digital platform for the sake of planned changes, upgrades and/or repairs. All deployments are blue/green, resulting in zero downtime; however, we reserve the right to perform scheduled maintenance with downtime if required, with 72 hours’ notice.
Backup
A copy of important data that is stored in an alternative location, so it can be recovered if deleted or corrupted.
Redundancy - When the same piece of data is stored in two or more separate places and is a common occurrence in many businesses.
Databases and production workloads are distributed across multiple AZs (availability zones) for automated failover.
Governance
How we control and direct our approach to security
Auditing
We have a read-only audit account in AWS that tracks all activity across our environments. We use AWS CloudTrail to log all activity into this separate audit account which administrators have access to for auditing purposes.
Risk Assessment
The security and privacy teams meet regularly to review any changes in the risk profile of our services and to develop remediation plans to ensure appropriate risk management.
Policies
All relevant policies are available publicly or on request and are reviewed annually by the executive team. These include but are not limited to: