Basin is a simple form backend that lets you collect form submissions without writing a single line of code.
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, reliable, scalable, and low-cost compute resources. It gives developers the tools to build virtually any web-scale application.
Basin + Amazon EC2 Integrations
Start, Stop, or Reboot an Instance in Amazon EC2 when a New Submission is created in Basin. It's easy to connect Basin + Amazon EC2 without coding knowledge. Start creating your own business flow.
Basin is a community effort to build a global decentralized database service. It is open source, and the purpose of the project is to provide unlimited storage via IPFS. Last year Amazon announced Amazon EBS (Elastic Block Store), a block-level storage service for volumes that offers three volume types: Magnetic, General Purpose SSD (gp2), and Provisioned IOPS (io1). With these volumes, data can be stored as files or as objects and used in any cloud, such as AWS, Azure, or Google Cloud. Basin can therefore serve as the interface between Amazon EBS and IPFS.
Amazon EBS is already integrated with S3 (Simple Storage Service). S3 provides simple storage services and also makes use of IPFS. S3 offers a way to store data at rest, but it does not by itself provide access control, encryption, or authentication. S3 is often used for web hosting purposes such as websites, photos, and videos.
Basin will act as the interface between IPFS and Amazon EBS. It will store data as files in encrypted form, provide access controls for the data using a cryptographic key based on an encryption scheme, and use Amazon's IAM for authentication.
Basin will be implemented as a plugin to the existing IPFS daemon that comes built in with EC2. The API responsible for the interaction between the Basin plugin and the IPFS daemon is provided by the Basin application itself. The complete process of accessing Amazon EBS volumes from Basin through EC2 is explained below.
Step 1. First, the user logs in to EC2 on AWS. The instance should run a Linux OS, and the user chooses an instance type according to his needs. This involves creating a new instance on EC2: one must have an AWS account and then create the instance from an Amazon Machine Image (AMI) that is based on the Amazon Linux AMI and has the tools needed to run the Basin plugin pre-installed. In this example, we assume the EC2 instance uses a CentOS 7 AMI with IPFS daemon version 0.4.14 pre-installed. When IPFS starts up, it loads its modules into memory, so changes to the code require restarting the IPFS daemon. After launching the instance, its IP address can be found in the AWS console. The console lets users connect to instances from the browser via Secure Shell (SSH) or Remote Desktop Connection (RDP) using their public and private keys; these keys must be generated before launching the instance itself.
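As a sketch, the launch described above could also be driven from the AWS CLI rather than the console. The block below only composes and prints the command; the AMI ID, key pair name, and security group ID are placeholders, not real values:

```shell
AMI_ID="ami-0example"        # placeholder: CentOS 7 AMI with IPFS 0.4.14 pre-installed
KEY_NAME="basin-keypair"     # placeholder: key pair generated before launch (used for SSH)
SG_ID="sg-0example"          # placeholder: security group ID

# Compose the launch command (printed here rather than executed):
LAUNCH_CMD="aws ec2 run-instances --image-id $AMI_ID --instance-type t2.micro --key-name $KEY_NAME --security-group-ids $SG_ID"
echo "$LAUNCH_CMD"

# After launch, read the public IP from the AWS console and connect with:
#   ssh -i basin-keypair.pem centos@<public-ip>
```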
Step 2. After logging in to the EC2 instance, update it with the latest packages available in the AWS repository using the yum update command in the Linux terminal, and update the IPFS daemon with the git pull command.
Step 3. Next, install the Basin application by following the instructions at https://basin-project.github.io/docs/installation/aws/index.html . Once installed, there are two ways of interacting with Basin: a command-line interface (CLI) or a Python API. The CLI provides more options than the Python API, and it is easier to understand when you are just starting out with Basin, so we will use the CLI. To start, cd into the basin directory inside the GOPATH and run basin commands from there. All available commands can be listed by typing basin --help in the Linux terminal, and basin setup --help shows the options for setting up Basin on Amazon EBS volumes. All essential information about running Basin can be found in the official documentation at https://basin-project.github.io/docs/running-basin/index.html .

There are two ways of accessing Basin from an Amazon EBS volume: over the SFTP protocol, or over the NFS protocol provided by Amazon EBS itself. SFTP is popular with users who want some security over their data, while NFS allows permissions to be managed on a per-directory basis. We will use NFS in this explanation, since it gives more flexibility to manage access controls on our data while maintaining some level of encryption: the data is encrypted using the user's public key, which is stored in the user's home directory on the EBS volume itself.
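The CLI session described in this step might look as follows. This is a printed transcript rather than executed commands, since the basin binary lives on the EC2 instance; only the commands named above appear in it:

```shell
# Transcript of the Basin CLI session described above (printed, not
# executed; the basin binary is installed on the EC2 instance).
SESSION='cd $GOPATH/src/basin
basin --help          # list all available commands
basin setup --help    # options for setting up Basin on Amazon EBS volumes'
echo "$SESSION"
```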
To get started with setting up the Basin plugin on Amazon EBS volumes, first create a bucket on S3, then create an IAM policy for the user that needs access to the new bucket. This can be done in the AWS console under the IAM Policies section, in Policy Templates. The policy needs to grant that user the s3:GetObject, s3:PutObject, s3:DeleteObject, and s3:ListBucket permissions, and nothing beyond these four (see https://docs.aws.amazon.com/AmazonS3/latest/dev/access-policy-variables-s3-actions-full-control.html#s3_permissions ).

After creating the policy, attach it to the newly created bucket: in the S3 console, select the bucket, open the Access Control tab, and click Attach Policy. Clicking Show Attachments in the Attachments section lists all policies attached to the bucket; select the newly created IAM policy and click Update bucket in the Permissions section. Finally, grant write permissions on the bucket by clicking Set Permissions and selecting the user that needs read/write access, as shown in the screenshot below, where read permission is granted only to the user that needs to read objects from the newly created bucket:
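The four-permission policy described above can also be written out as a policy document. In this sketch, the bucket name my-basin-bucket is a placeholder, and the aws command in the comment is one way the policy might be registered from the terminal instead of the console:

```shell
# Write the IAM policy described above to a file. Only the four listed
# S3 actions are granted; "my-basin-bucket" is a placeholder name.
cat > basin-s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::my-basin-bucket", "arn:aws:s3:::my-basin-bucket/*"]
    }
  ]
}
EOF
# It could then be registered with, e.g.:
#   aws iam create-policy --policy-name basin-s3 --policy-document file://basin-s3-policy.json
python3 -m json.tool basin-s3-policy.json > /dev/null && echo "policy OK"
```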
Once this is done, create the directory structure for storing data within the newly created bucket; here we call it the test directory, as shown in the screenshot below. Appropriate permissions need to be set for this directory as well:
Now download the IAM user credentials file with the wget command in the Linux terminal from https://basin-project.github.io/docs/getting-credentials/index.html . Once the file is downloaded, give it appropriate permissions by running chmod 600 iam_credentials in the Linux terminal. The access credentials are then ready for use, as shown in the screenshot below:
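The download-and-chmod step above can be sketched as follows. The wget line is commented out and a stand-in file is created instead, so the permission change can be shown without network access:

```shell
# Fetch the credentials file (commented out here; URL from the text):
#   wget https://basin-project.github.io/docs/getting-credentials/index.html -O iam_credentials
touch iam_credentials             # stand-in for the downloaded file
chmod 600 iam_credentials         # owner read/write only, per the text
stat -c '%a %n' iam_credentials   # → 600 iam_credentials
```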
We are now ready to set up an AWS security group that allows only packets originating from the EC2 instance running our Basin plugin onto the AWS network, so that no unauthorized traffic can enter from outside. This is done under Network & Security > Security Groups in the AWS console. Apply a TCP rule for port 22 (SSH) so that the port is opened only for our EC2 instance, as shown in the screenshot above, where TCP rule number 2 applies to port 22 only for the instance running the Basin plugin. In addition to port 22, ports 8080, 9001, 80, 443, 5500, 50070, 50075, 50076, and 50077 are allowed through the firewall, again only for the EC2 instance that runs the Basin plugin.
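The firewall rules above could also be applied from the AWS CLI. The block below only prints one authorize command per port from the list rather than executing anything; SG_ID and the source CIDR are placeholders:

```shell
SG_ID="sg-0example"           # placeholder security group ID
SRC_CIDR="203.0.113.10/32"    # placeholder: restrict to the Basin instance's address
PORTS="22 8080 9001 80 443 5500 50070 50075 50076 50077"

# Print one ingress-rule command per allowed port:
for port in $PORTS; do
  echo "aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port $port --cidr $SRC_CIDR"
done
```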
The process to integrate Basin and Amazon EC2 may seem complicated and intimidating. This is why Appy Pie Connect has come up with a simple, affordable, and quick solution to help you automate your workflows. Click on the button below to begin.