JSPM

  • Downloads 19
  • License GPL-3.0

Access Twitter data without an API key.

Package Exports

  • scrape-twitter

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (scrape-twitter) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.

Readme

scrape-twitter

🐦 Access Twitter data without an API key

This module provides a command-line interface for scraping profiles, timelines, connections, likes, search results and conversations.

It also exposes streams and a promise-returning function to help you access Twitter data from your own applications.

Real-time firehoses can be created using the companion module monitor-head-stream.

Features

  • Get Twitter data without being required to configure an API key.
  • Twitter cannot constrain access as easily as it can with an API or an individual API key: any restriction it introduces would also apply to its public site, and if the scraper breaks it can be fixed, so you are no longer beholden to Twitter.
  • Grab timelines, whole conversations, profiles, connections, likes, etc.
  • Automatically pages to fetch all tweets.
  • Provides metadata on how tweet replies are linked together (e.g. isReplyToId).

Example

Get profile

$ scrape-twitter profile sebinsua
# ...

Get timeline

$ scrape-twitter timeline nouswaves
# ...

Get likes

This command requires a valid login. It will check for the following environment variables: TWITTER_USERNAME, TWITTER_PASSWORD and TWITTER_KDT. These can also be picked up from a dotenv file at ~/.scrape-twitter. The first time you log in you will be asked to store the TWITTER_KDT; this is used by Twitter to recognise your device.
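
For reference, a ~/.scrape-twitter dotenv file might look something like this (the values shown are placeholders):

TWITTER_USERNAME=your-twitter-username
TWITTER_PASSWORD=your-twitter-password
TWITTER_KDT=your-device-token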

$ scrape-twitter likes sebinsua
# ...

Get connections

This command also requires a valid login.

$ scrape-twitter connections sebinsua --type=following
# ...

Get conversation

$ scrape-twitter conversation ctbeiser 691766715835924484
# ...

Search

$ scrape-twitter search --query "from:afoolswisdom motivation" --type latest
# ...

Get list

$ scrape-twitter list nouswaves list
# ...

The JSON output plays nicely with CLI tools such as jq, gshuf (from coreutils) and terminal-notifier.

For example, a MOTD-like script might contain:

scrape-twitter search --query="from:afoolswisdom knowledge" | jq -r '.[].text' | gshuf -n 1 | terminal-notifier -title "Knowledge (MOTD)"

Install

With yarn:

yarn global add scrape-twitter

With npm:

npm install -g scrape-twitter

API

new TimelineStream(username: string, { retweets: boolean, replies: boolean, count: ?number })

Create a ReadableStream<Tweet> for the timeline of a username.
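
As a sketch of programmatic use, assuming TimelineStream is exported from the package's main module:

const { TimelineStream } = require('scrape-twitter')

// Stream up to 50 tweets from a timeline, excluding retweets and replies.
const timeline = new TimelineStream('sebinsua', { retweets: false, replies: false, count: 50 })

timeline.on('data', tweet => console.log(tweet.text))
timeline.on('error', err => console.error(err))
timeline.on('end', () => console.log('done'))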

new LikeStream(username: string, { count: ?number, env: process.env })

Create a ReadableStream<Tweet> for the likes of a username.

new ConnectionStream(username: string, type: 'following' | 'followers', process.env)

Create a ReadableStream<UserConnection> for the connections of a username.
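
This stream needs the same login environment variables as the connections CLI command. A hedged sketch, passing process.env as the third argument per the signature above and assuming the class is exported from the main module:

const { ConnectionStream } = require('scrape-twitter')

// Requires TWITTER_USERNAME, TWITTER_PASSWORD and TWITTER_KDT in the environment
// (or via ~/.scrape-twitter), just like the `connections` CLI command.
const connections = new ConnectionStream('sebinsua', 'following', process.env)

const following = []
connections.on('data', user => following.push(user))
connections.on('error', err => console.error(err))
connections.on('end', () => console.log(`fetched ${following.length} connections`))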

new ConversationStream(username: string, id: string, { count: ?number })

Create a ReadableStream<Tweet> for the conversation that belongs to a username and tweet id.
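
A sketch of grouping a conversation's replies using the isReplyToId metadata mentioned under Features; the grouping logic here is illustrative, not part of the package:

const { ConversationStream } = require('scrape-twitter')

const conversation = new ConversationStream('ctbeiser', '691766715835924484')

// Bucket replies by the tweet they respond to.
const repliesTo = new Map()
conversation.on('data', tweet => {
  if (tweet.isReplyToId) {
    const replies = repliesTo.get(tweet.isReplyToId) || []
    replies.push(tweet)
    repliesTo.set(tweet.isReplyToId, replies)
  }
})
conversation.on('error', err => console.error(err))
conversation.on('end', () => console.log(`${repliesTo.size} tweets in this conversation have replies`))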

new ThreadedConversationStream(id: string)

Create a ReadableStream<Tweet> for the thread that belongs to a tweet id.

new TweetStream(query: string, type: 'top' | 'latest', { count: ?number })

Create a ReadableStream<Tweet> for the tweets that match a query and type.
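
For example, to mirror the CLI search shown earlier (again assuming the class is exported from the main module):

const { TweetStream } = require('scrape-twitter')

// Latest tweets matching a query, capped at 10 results.
const results = new TweetStream('from:afoolswisdom motivation', 'latest', { count: 10 })

results.on('data', tweet => console.log(tweet.text))
results.on('error', err => console.error(err))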

new ListStream(username: string, list: string, { count: ?number })

Create a ReadableStream<Tweet> for the username's list.

getUserProfile(username: string)

Get a Promise<UserProfile> for a particular username.
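
A minimal sketch, assuming getUserProfile is exported alongside the stream classes:

const { getUserProfile } = require('scrape-twitter')

getUserProfile('sebinsua')
  .then(profile => console.log(profile))
  .catch(err => console.error(err))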