We'll send you an email when the integration is ready and keep you informed on all the latest Connect updates.
Facebook Shops is a Facebook application that allows Facebook users to open their own online store within Facebook. With over a billion users on the platform, you can reach more customers in minutes than you could on your own!
MonkeyLearn is a text analysis platform that helps you identify and extract actionable data from a variety of raw texts, including emails, chats, webpages, papers, tweets, and more! You can use custom tags to categorize texts, such as sentiments or topics, and extract specific data, such as organizations or keywords.
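To make the idea concrete, a MonkeyLearn-style classification call is just a JSON request against a classifier model. The sketch below only builds the request pieces without sending anything; the API key and model ID are placeholder assumptions, not real credentials:

```python
import json

# Placeholder credentials -- hypothetical values, replace with your own
API_KEY = "your-api-key"
MODEL_ID = "cl_example"  # hypothetical classifier model ID

def build_classify_request(texts):
    """Build the URL, headers, and JSON body for a batch classification call."""
    url = f"https://api.monkeylearn.com/v3/classifiers/{MODEL_ID}/classify/"
    headers = {
        "Authorization": f"Token {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"data": texts})
    return url, headers, body

url, headers, body = build_classify_request(["I love this store!", "Shipping was slow."])
```

Sending `body` to `url` with those headers would return one set of tags per input text.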
MonkeyLearn Integrations

It's easy to connect Facebook Shops + MonkeyLearn without coding knowledge. Start creating your own business flow.
What are Facebook Shops, and how do they work?
Facebook Shops allow your customers to purchase your products with the ease of an online store. Facebook will match your products to your customers, create a custom landing page for them, and process their payments.
Customers can enter your Facebook shop from a mobile device, laptop, or desktop. They simply click the link you provide to enter your store, and your products are then available for purchase right inside a Facebook window. This is a great way to get customers to spend more time on Facebook, and more money while they are there.
What is MonkeyLearn?
MonkeyLearn is a free tool that lets users analyze text in order to extract meaningful information from it. It offers several model types that make it easy to analyze any text document. In this case, we'll use the Text Classifier to organize our data based on topics.
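MonkeyLearn's Text Classifier is a hosted model, but the general idea can be sketched offline with a toy keyword-based classifier. The topics and keywords below are made up purely for illustration:

```python
def classify_topic(text):
    """Toy stand-in for a topic classifier: match keywords to a topic label."""
    topics = {
        "shipping": ["shipping", "delivery", "arrived"],
        "pricing": ["price", "expensive", "cheap"],
    }
    words = text.lower().split()
    for topic, keywords in topics.items():
        if any(k in words for k in keywords):
            return topic
    return "other"

print(classify_topic("The delivery arrived late"))  # -> shipping
```

A real classifier learns these associations from labeled examples rather than a hand-written keyword list, but the input/output shape is the same: text in, topic label out.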
In order to do this, our first step is to upload our text files to the MonkeyLearn platform. Here, we'll upload our Excel file, which contains a list of all of our customers, their email addresses, and their purchased products. We'll also upload a .txt file with all of our customers' posts from Facebook.
In order to use the data from our customers’ Facebook posts, we’ll create a new Python script called getPosts.py.
In order to import the FB Data from the CSV file, we’ll need to use Pandas, so we’ll import it before we continue:
import pandas as pd
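Since pandas is now imported, the same CSV could also be loaded in a single call. The snippet below uses an in-memory string standing in for the real file, and assumes the file has an `email` header column (an assumption, since the real file isn't shown):

```python
import io
import pandas as pd

# In-memory stand-in for facebook_posts.csv (assumed header: email,post)
sample_csv = io.StringIO(
    "email,post\n"
    "ann@example.com,Great shop!\n"
    "bob@example.com,Love it\n"
)

df = pd.read_csv(sample_csv)
emails = df["email"].tolist()
print(emails)  # -> ['ann@example.com', 'bob@example.com']
```

With a real file you would pass the path (`pd.read_csv('facebook_posts.csv')`) instead of the `StringIO` object.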
We’ll then change the directory to our directory where we stored our data:
cd ~/Data/facebookData/
Then, we’ll read in the file:
import csv
reader = csv.reader(open('facebook_posts.csv', 'r', newline=''), delimiter=',')
We'll then loop over the CSV file row by row (csv.reader already splits each line on the delimiter for us):

for row in reader:
    lineList = row
We only want the "email" column, so we'll take column number 0:
    email = lineList[0]
Next, we’ll read in our Excel file with our customer purchases:
import csv
reader = csv.reader(open('facebook_customers_purchases.csv', 'r', newline=''), delimiter=',')
As before, we'll loop over the CSV file row by row:

for row in reader:
    lineList = row
And just like before, we only want the "email" column, so we'll take column number 0:
    email = lineList[0]
Now, we’ll combine the list of emails from both of our CSV files into one list:
allEmails = []
for i in range(len(lineList)):
    email = lineList[i][0]
    allEmails.append(email)
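Putting the steps above together, reading both files and merging their email columns can be sketched end to end. The snippet uses in-memory strings in place of the real CSV files, with made-up rows for illustration:

```python
import csv
import io

# In-memory stand-ins for facebook_posts.csv and facebook_customers_purchases.csv
posts_csv = io.StringIO("ann@example.com,Great shop!\nbob@example.com,Love it\n")
purchases_csv = io.StringIO("bob@example.com,T-shirt\ncara@example.com,Mug\n")

def emails_from(csvfile):
    """Collect column 0 (the email address) from every row."""
    return [row[0] for row in csv.reader(csvfile, delimiter=",")]

allEmails = emails_from(posts_csv) + emails_from(purchases_csv)
print(allEmails)
# -> ['ann@example.com', 'bob@example.com', 'bob@example.com', 'cara@example.com']
```

If duplicates matter (bob appears in both files here), wrapping the result in `sorted(set(allEmails))` would deduplicate it.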
Next, we’ll open up our files for MonkeyLearn:
import json
from urllib.request import urlopen

filePath1 = "../../files/facebookPostsTextClassifier-1.0.zip"
filePath2 = "../../files/facebookCustomersProductsTextClassifier-1.0.zip"
url1 = urlopen(filePath1)
url2 = urlopen(filePath2)
json1 = json.load(url1)
json2 = json.load(url2)

The full script, cleaned up, looks like this:

#!/usr/bin/env python3
# coding: utf8

# Importing packages
import argparse                  # Parse command line arguments
import csv
import re
from datetime import datetime
from io import BytesIO

import numpy as np
import pandas as pd
import requests                  # Making HTTP requests
from bs4 import BeautifulSoup
import matplotlib.pyplot as plt
from matplotlib import rcParams as rcparms


def getDataSet():
    url1 = "http://www3.monkeylearn.com/datasets/103/"
    url2 = "http://www3.monkeylearn.com/datasets/104/"
    response1 = requests.get(url1)
    response2 = requests.get(url2)
    return response1.content


def parseCommandLine():
    parser = argparse.ArgumentParser()
    parser.add_argument("-x", "--xlsxfile", type=str, help="path to excel file")
    parser.add_argument("-y", "--txtfile", type=str, help="path to txt file")
    parser.add_argument("-z", "--zipfile", type=str, help="path to zip file")
    args = parser.parse_args()
    return args


def getFBData(filename):
    dataSet = []
    dataSet1 = []
    fbSummaryFileName = filename + '.fb_summary'
    response = requests.get('https://graph.facebook.com/' + filename + '/insights/export',
                            headers={'Accept': 'application/json'})
    data1 = response.json()
    df1 = pd.read_csv(BytesIO(response.content), encoding='utf-8')
    dataSet1 = df1['data']
    fbPostsFileName = filename + '.fb_posts'
    response2 = requests.get('https://graph.facebook.com/' + filename + '/posts?limit=100&after='
                             + str(datetime.now()) + '&access_token='
                             + getAccessToken()['access_token'],
                             headers={'Accept': 'application/json'})
    dataSet2 = response2.content
    # The original listing is truncated here, mid-way through reading the posts into a DataFrame.
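The command-line parsing in the script above can be checked in isolation. Note that argparse expects `type=str` (the actual type, not the string `"str"`). The flags and file names below are illustrative; adding an `argv` parameter makes the parser testable without real command-line arguments:

```python
import argparse

def parseCommandLine(argv=None):
    """Parse the script's flags; argv=None falls back to sys.argv."""
    parser = argparse.ArgumentParser()
    parser.add_argument("-x", "--xlsxfile", type=str, help="path to excel file")
    parser.add_argument("-y", "--txtfile", type=str, help="path to txt file")
    parser.add_argument("-z", "--zipfile", type=str, help="path to zip file")
    return parser.parse_args(argv)

# Simulate: python getPosts.py -x customers.xlsx -y posts.txt
args = parseCommandLine(["-x", "customers.xlsx", "-y", "posts.txt"])
print(args.xlsxfile)  # -> customers.xlsx
```

Flags that are not supplied (here, `-z`) come back as `None`, so the script can check which inputs were provided.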
The process to integrate Facebook Shops and MonkeyLearn may seem complicated and intimidating. This is why Appy Pie Connect has come up with a simple, affordable, and quick solution to help you automate your workflows. Click on the button below to begin.