Suppose your password consists of uppercase and lowercase letters, numbers, and spaces. That gives you an alphabet of 2*26 + 10 + 1, or 63, possible characters to choose from. That's just a tad less than 2^6, so we'll fudge and say that each character gives you 6 bits of entropy. If you generate a password of 12 random characters from this alphabet, it will have 12 * 6, or 72 bits of entropy.
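As a sanity check, here is the same arithmetic in Python:

```python
import math

# Entropy per character for an alphabet of 63 symbols
# (26 lowercase + 26 uppercase + 10 digits + 1 space).
alphabet_size = 2 * 26 + 10 + 1              # 63
bits_per_char = math.log2(alphabet_size)     # just under 6 bits

password_length = 12
total_entropy = password_length * bits_per_char
print(f"{bits_per_char:.3f} bits/char, {total_entropy:.1f} bits total")
# prints: 5.977 bits/char, 71.7 bits total
```

So the honest figure is about 71.7 bits; rounding each character up to 6 bits slightly flatters the password.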
How good is that?
That was the question that kept bugging me. Just how strong is any given number of bits of entropy? How long would a brute-force attack take that guesses every possible password of 12 characters chosen as I describe? Better yet, what would it cost an attacker?
Although I haven't tried it, I learned of a tool called "Hashcat" for guessing password hashes. I found this article that describes running Hashcat on AWS with its most powerful virtual machine, the p3.16xlarge instance, costing a bit under $25/hr. According to the benchmark used in the article, the instance was able to generate about 60,000,000 SHA-256 hashes per second, using the same hashing algorithm that KeePass does.
Now the math:
One instance can generate 3600 * 60*10^6 SHA-256 hashes in an hour. That’s approximately 2^37.65. So it would take a half hour on average to guess a password that has 37.65 bits of entropy.
Let's say your password has 72 bits of entropy. That means the search space is 2^(72 - 37.65), or about 2^34, times larger, so an average guess would take over 2^33 hours at $25/hr. (Or you could pay for about 2^34 instances and have the answer in no more than an hour.)
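For what it's worth, the exponent arithmetic above can be checked with a few lines of Python, using the figures quoted from the article:

```python
import math

# Figures quoted above: ~60 million SHA-256 hashes/second at ~$25/hour.
hashes_per_second = 60_000_000
hashes_per_hour = 3600 * hashes_per_second   # 2.16e11

bits_per_hour = math.log2(hashes_per_hour)   # ~37.65 bits searched per instance-hour
extra_bits = 72 - bits_per_hour              # doublings left for a 72-bit password
print(f"one instance searches 2^{bits_per_hour:.2f} hashes/hour; "
      f"a 72-bit password needs 2^{extra_bits:.2f} instance-hours at worst")
```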
Either way, that's about $215 trillion, on average, for the world's fastest commercially available computing platform to guess a password with 72 bits of entropy, with a worst-case scenario of $430 trillion. For comparison, world GDP was estimated at $80 trillion for 2017.
The article also states that you might be able to get a “spot request” for 50%-80% less. OK, so now we’re talking a worst case scenario of only $86 trillion.
This Hashcat article describes only how expensive your password would be for an attacker with today's fastest commercially available computing platform. We might assume that the intelligence services of an advanced country like the US or China have considerably more computing resources at their disposal. That is something a rival global power or terrorist organization might need to consider.
Edward Snowden was quoted as saying that "serious" encryption requires at least 100 bits of entropy for the foreseeable future. He might have been thinking of governments, banks, and military organizations. For ordinary private citizens who need to keep a few savings accounts and personal documents safe from criminals if their laptops get lost or stolen, 72 bits should be sufficient if my analysis holds. If you are unsure, you can always make your password longer. Remember, each additional character adds just a little under 6 bits of entropy, increasing the cost to an attacker by a factor of just under 64. (Curiously, Apple and some financial websites put limits on your password length and character choices, hobbling your ability to secure your personal data.)
Some pundits cite "Moore's Law" as a factor that will weaken passwords over time. Moore's Law postulates that the number of transistors that can fit on a chip of a given size doubles every two years (some say every 18 months). Moore's Law is not really a law at all but an historic observation that has nevertheless remained remarkably consistent over the past half century. Doubling the number of transistors does not necessarily double processing speed. Other factors, such as heat dissipation and quantum tunneling, put a limit on processor and memory speeds long before transistor density might reach the ultimate theoretical boundary set by the Planck length.
Even if processing speed and memory size continues to double, current encryption technology can keep up simply by adding one or two bits of entropy every two years. We’ll just have to put up with longer passwords. The Electronic Frontier Foundation (EFF) and other organizations publish long lists of curated words that can be randomly selected by rolling casino dice. Five rolls of a single die can select from a list of 6^5, or 7776 words. Thus each word has an entropy of roughly 13 bits. Six such random words have an entropy of 6 x 13 or 78 bits. Remember that $430 trillion cost? Now multiply that by 64.
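Here is a sketch of the same idea in software, with a hypothetical stand-in word list; Python's secrets module plays the role of the casino dice (a cryptographically strong source of randomness):

```python
import math
import secrets

# A stand-in word list: the real EFF large list has 6**5 = 7776 entries,
# one for each possible roll of five dice.
wordlist = [f"word{i}" for i in range(6**5)]

bits_per_word = math.log2(len(wordlist))   # ~12.9 bits per word
passphrase = " ".join(secrets.choice(wordlist) for _ in range(6))
print(passphrase, f"~ {6 * bits_per_word:.1f} bits")
```

Six words from a 7776-entry list give about 77.5 bits, in line with the rough "6 x 13" figure above.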
A sequence of six words, such as "jeeringly kick unhappily numerate hash datebook", is considerably easier for most people to remember than a random string of mixed-case characters and numbers such as "r646STqmChaa". This page makes it easy for you to select random words of different lengths and gives you a choice of lists with different features. You can read about it. To trust it, you need only verify that it generates each sequence locally in your browser and does not store or transmit the results it generates.
(You also have to trust the pseudo-random number generator your browser uses to make each selection. Pseudo-random number generators are beyond the scope of this discussion, but suffice it to say that to be effective, a generator doesn't have to be truly random; it just has to be unbiased enough to make it exceedingly hard for an attacker to predict and exploit. Needless to say, the seed, or starting value, of the random sequence must not be predictable.)
You will probably need to write down each passphrase you use and keep it in a safe place until you have memorized it. Writing anything down means it becomes available to anyone who can find it and knows how to use it, but without that you run the risk of permanently locking yourself out of all your protected data should you ever forget it. Yet another trade-off between security and convenience.
Some journalists have cited quantum computing as a game-changer in encryption technology. Quantum computing has not yet gone beyond the proof-of-concept stage, in lab settings that can cool a circuit down to near absolute zero. It remains speculative whether and when it will become practical and affordable on a mass scale. If it does, it will pose a threat to privacy only during the period when it is feasible but too costly for all but governments and wealthy entities. When and if it becomes as affordable and ubiquitous as cell phones, we should expect it to be used to encrypt communication as well, removing the advantage from crackers with quantum technology.
Note that the above only deals with a brute-force attack on your encrypted data. There are other ways for criminals to get at your data that do not require GDP-sized computing budgets. The simplest and most effective is to point a gun at your head, or your loved one's head, and demand your password. Less violent ways might involve phishing and keystroke logging. There have been news reports of victims being induced to give criminals access to their bank accounts while under the influence of the drug scopolamine. Encryption cannot protect you from a threat or physical attack on your person.
Two-factor authentication (2FA) can provide some additional protection for your data against phishing and keystroke logging, because an attacker also needs your physical security device to obtain an additional code. You still need to hang onto your security device and assume it has not been compromised. Two-factor also makes it more likely that you'll get locked out of your accounts if you don't take a few precautions, like copying down recovery keys and keeping them in a safe place you can get to.
You might also want to consider the cost to an attacker of repeated guesses of your password in different situations. If the attacker has seized your encrypted hard drive, the number of attempts per second is limited only by his computing budget. But for someone parked across the street from you in a white van trying to guess your Wifi password, most Wifi routers just take too long to process each password attempt. This limits him to at most one or two attempts per minute, and adding additional processing power to his arsenal will be of no use to him. So your Wifi password probably doesn’t need 100 bits of entropy for your Wifi to be safe.
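To illustrate the difference in scale, here is a rough back-of-the-envelope comparison in Python; the one-or-two-attempts-per-minute figure is the illustrative assumption from above:

```python
# Rough comparison of guessing rates, using illustrative numbers:
# an offline attacker runs the Hashcat benchmark quoted earlier, while
# an online attacker is throttled by the router itself.
offline_rate = 60_000_000          # guesses/second against a stolen drive
online_rate = 2 / 60               # ~2 guesses/minute against a live router

bits = 40                          # even a fairly modest password
hours_offline = 2**bits / offline_rate / 3600
years_online = 2**bits / online_rate / (3600 * 24 * 365)
print(f"2^{bits} guesses: ~{hours_offline:.0f} hours offline, "
      f"~{years_online:,.0f} years online")
```

A 40-bit password falls in an afternoon offline, but the same password would outlast the attacker by many lifetimes when the router throttles each attempt.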
On the other hand, all bets are off again if an attacker can surreptitiously enter your home or business, gain access to your Wifi router, copy its memory to his own device, and cover his tracks so you remain unaware of the breach. Then he can take it back to his computing lab and make as many attempts per second as his computing resources will allow. CBS “60 Minutes” did a segment on an FBI “search and entry” team that specializes in surreptitious entry for gathering evidence. Of course, this requires a judge to sign a warrant, and a senior manager has to sign off on covering the cost of such a team. Unless you’re a high-profile criminal, you’re probably not worth it.
The same goes for your email account. Your email provider will only allow so many attempts per minute, whether by policy or by the inherent slowness of the online authentication process over a remote connection. But if an attacker can steal the provider's account database with encrypted data and take it back to their lab, they are again limited only by their own computing budget. Mass breaches of raw encrypted data have been known to happen, and some providers have not been immediately forthcoming in letting their customers know of a breach so they can change their passwords in time. So I treat my email password with the same care I would treat my bank account's.
How secure is your sh*t, really? Security is never absolute, but if you take the time to master a few concepts, like bits of entropy, and a few tools, like a password vault secured with a strong passphrase and two-factor authentication, you can significantly ratchet up the security of your sensitive digital data.
Every SSL certificate contains an issuer field that points to another certificate needed to verify the primary certificate. Often, the issuer certificate is a root certificate, a self-signed certificate issued by a known and trusted certificate authority. Popular web browsers may be shipped with their own collections of trusted root certificates, or they may depend on a collection of certificates stored in a restricted directory in the local filesystem.
When my project manager requested the certificate from the CA customer web form, the form asked whether the certificate was intended to be used only within our company's domain, or in the wild. He checked off that it would be used only within the company's domain. In response, the CA mailed him a primary certificate to be returned by each server response and an issuer certificate to be distributed to clients using the web service for verifying the primary certificate returned by each response. We'll refer to this certificate as Issuer.crt.
I created the web server with unit tests, then wrote a small client application in Python with example requests. The Python library for HTTP requests is called (drum roll …) requests. requests depends on the certifi library, which bundles a collection of root certificates in one file. Each HTTP request takes an optional named parameter, verify, that allows me to provide an alternate directory or file of root certificates. requests also provides a Session object that can be configured with parameters, including verify, then re-used for multiple requests.
When my example does:
s = requests.Session()
s.verify = 'Issuer.crt'
s.get('https://my-web-service')
s.get() throws an SSLError exception. The reason is that Issuer.crt is not a root certificate. It, too, refers to an issuer, which in this case is a CA root certificate we'll call CaRoot.crt. CaRoot.crt is in the collection provided by certifi, but requests will only search one collection, and by setting the verify parameter, I pointed it at a collection with exactly one certificate: Issuer.crt.
Some browsers and HTTP libraries will use the Authority Information Access (AIA) extension to attempt to resolve references to issuer certificates not already pre-installed in their collection by downloading them dynamically. Note that this introduces yet another security vulnerability: each new web request and response must itself be verified with SSL, which can result in a circular chain that can only be stopped by a timeout.
The authors of the Python requests library chose not to make use of the AIA extension. If requests can't resolve the chain of trust with whatever certificate collection it has been given, it throws an SSLError exception.
The only way I could get the requests library to successfully use my web service was to append Issuer.crt to the collection of root certificates provided by certifi. This is not a permanent solution: it would require every user of my web service to do the same, and the modification would have to be made again each time a user updates the certifi library on his local machine.
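For illustration, here is a sketch of one variation on that workaround: rather than editing certifi's own bundle in place, concatenate its roots with Issuer.crt into a separate file and point verify at that. The file names here are hypothetical:

```python
# Sketch: build a combined CA bundle instead of editing certifi's file.
import os
import certifi

ISSUER_CERT = "Issuer.crt"               # the intermediate cert from the CA
BUNDLE = "combined-ca-bundle.crt"

with open(certifi.where()) as f:         # certifi's collection of root certs
    bundle = f.read()
if os.path.exists(ISSUER_CERT):          # append our issuer cert if present
    with open(ISSUER_CERT) as f:
        bundle += "\n" + f.read()

with open(BUNDLE, "w") as out:
    out.write(bundle)

# A requests Session can then point at the combined bundle:
#   s = requests.Session()
#   s.verify = BUNDLE
```

This still has to be repeated by every client, but the script can simply be re-run after a certifi upgrade instead of patching the library's own file.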
For due diligence, I wrote another example client app in Perl that sends requests to the same web service with the Perl REST::Client library. REST::Client had no problem completing HTTP requests to my web service. Nevertheless, my project manager and others who may use this web service are already using Python, and I see no reason they should not be able to do all their work in whatever scripting language they prefer.
In response, I created the cert_session Python package.
My app lets you search for movies by title. The Open Movie Database API will return only ten titles at a time. So my first attempt filled a list with the first ten titles from a query, then rendered a set of pager buttons at the end of the list. The user could press the "Next" button to request another ten titles. Once the next set was displayed, the user could continue forward or go backward. There were also buttons to fast-forward to the last page, or rewind to the first page. The result looked like this:
Because a list of ten film titles wouldn’t fit on my phone’s screen after the header, I wrapped the entire screen in a ScrollView component. Since I was just getting started and trying to make sure everything worked, I did not even bother to request enough titles to fill a screen on a larger device, such as a tablet.
Adam Navarro, from the Expo support community, saw my app in the Expo app database when I asked a question, and suggested that I replace the pager buttons with a continuously refreshing, or "infinite", scrolling list. In other words, when the user enters a new query, the app makes enough requests to fill the screen on whatever device it's on, then makes additional requests whenever the user attempts to scroll past the end of what's already been loaded, until all available results are exhausted. Adam pointed out that I could achieve this easily with the React Native FlatList component.
The FlatList was surprisingly easy to use. At the minimum, you give it an array of items, and a method or component to render each item.
<FlatList data={films} renderItem={renderItem} />
Optionally, you can pass it a keyExtractor method that tells it how to obtain a unique key for each list item. Otherwise, your renderItem method or component will have to do that. Since the key property is used only for React bookkeeping and has no bearing on how each list item actually gets rendered, I chose to pass a keyExtractor property to handle it, separate from the renderItem method. That's just how I understand Separation of Concerns. So my FlatList now looks like this:
<FlatList data={films} renderItem={renderItem} keyExtractor={item => item.imdbID} />
But there is more goodness in a FlatList. It also takes a ListHeaderComponent and a ListFooterComponent property. I assigned it a header component that renders a search bar and displays a little message about how many items were found. I assigned it a footer that shows an animated spinner, or "activity indicator" graphic, whenever the app is loading more items to append to the end of the list. Otherwise, the footer remains empty. The code now looks like this:
<FlatList data={films} renderItem={renderItem} keyExtractor={item => item.imdbID} ListHeaderComponent={<Header totalResults={totalResults} />} ListFooterComponent={<Footer />} />
The result looks like this:
Looking a little better, I think!
But I wasn't done yet. When the user scrolls to the end of the ten items the app has already fetched and rendered, we want the app to go back for more. dispatchFetchPage() does just that. It makes a remote request to omdbapi.com to fetch ten more film titles from the original query. If all goes well, a response comes back with ten more titles that we append to the list. So the code now looks like this:
<FlatList data={films} renderItem={renderItem} keyExtractor={item => item.imdbID} ListHeaderComponent={<Header totalResults={totalResults} />} ListFooterComponent={<Footer />} onEndReached={() => dispatchFetchPage()} />
So now whenever the user scrolls to the end of the current list, the app will automatically fetch and append more items, as long as there are more to fetch. Some console.log() messages showed that by default the FlatList will invoke onEndReached() several times after it has fetched the first batch of results, and several more times each time the last item scrolls into view. In fact, it seems to begin fetching additional pages even before the last item is visible, once it is within a certain distance of becoming visible.
What if all doesn’t go well? Then the app will display a pop-up with a message like “Unable to fetch data. Try a different title or try again later.” I got it covered!
At this point I expected the end-user experience to be seamless, but I still had work ahead of me. There was a serpent in the garden.
Each list item responds to a press by the user. When the user presses, the app should fetch and display details for the corresponding film, like when it was made, who directed it, who acted in it, etc. But while the app was busy fetching several pages ahead, the app ignored presses for several long seconds, not only on the list items, but also on the back key and the Info icon. I call this The Silent Treatment.
I had trouble understanding why, because once the app makes a remote request, it returns immediately and ends its current cycle to wait for another event, such as a press on a UI component that has already been rendered and is listening for presses. Then I generated console.log() messages from the list-item render method and began to notice that the app was re-rendering every item in the list each time it added ten new items. In other words, on the first fetch it rendered 10 items, on the next fetch it rendered 20 items, and so on.
This accounted for the Silent Treatment. React Native runs your code in one thread, while the native UI code runs in a separate thread. They communicate over a pair of incoming and outgoing message queues called the bridge. When the user presses a touchable UI component (any component that is listening for press events), the UI thread responds by enqueueing a message to the JavaScript thread. But the JavaScript thread is non-preemptive. If it is busy re-rendering a growing number of list items from the top each time, it will not stop what it's doing to check its incoming queue for new events until it has finished. So the end user keeps pressing and nothing happens for several seconds, until the JavaScript thread has finished its work.
But why was the FlatList re-rendering every item in the list from the beginning each time the app appended new items? I had more work to do.
The app uses the Redux store to manage global state variables such as the list of films to be displayed. The Redux documentation is quite clear about the virtue of making the values it manages immutable. "Immutable" in this context means that if your app needs to update a property of an object in the Redux store, it should replace the whole object, not just a single property. Otherwise, if that object is a property 'p' of a component, React will compare p to its previous value with '===' (identity comparison), see that p is still a reference to the same object, and will not bother to re-render that component. This is how React seeks to optimize performance and avoid unnecessarily re-rendering the same components.
So in the stereotypical use-case where we have a list of items to render and we’ve added more items, instead of
newItems.forEach(newItem => list.push(newItem))
return list
our reducer method needs to do this:
return [ ...list, ...newItems ]
In other words, this returns a new array that will be !== list. Note that the original items themselves have not changed. When it is time to test whether these items need to be re-rendered, React should see that they are still identical to their previous values and can skip re-rendering them.
(For those who haven’t yet been exposed to the new ES6 syntax, read this to understand what those ‘…’ ellipses mean.)
But FlatList doesn’t play that way. FlatList is already optimized. It extends VirtualizedList which maintains a finite render window and replaces items that are not visible with blank space in the DOM to preserve memory. The bottom line is that we can let the list itself be mutable. We just don’t want the items in the list to mutate unless there is really something that needs to be rendered differently.
In this case, once the app fetches film titles, each title and its associated data doesn't change. React will render the virtual DOM for each list item only once and will replace UI items that are scrolled out of visibility with blank space. When it appends additional items it has fetched, it will render only those, and not bother to re-render the items that were already in the list.
But if we write our reducer method as I did to return a new immutable list each time it has new items to append, as in
return [ ...list, ...newItems ]
React will notice that the list itself is a reference to a different object and will re-render every item from the beginning of the list. As the length of the list increases with each fetch, so does the number of items to re-render each time new results are appended. The Silent Treatments just get longer and longer from the end user's point of view.
So the reducer method needs to mutate the list of films as in this example:
newItems.forEach(newItem => list.push(newItem))
return list
When the user enters a new search term and the initial 10 results of a new request come back, the app starts a new list that is !== to the old one:
return [ ...newItems ]
This time, React does render the list from the beginning, as it should.
The fix was easy, but it was also exactly the opposite of how I thought I was supposed to write reducer methods for the Redux store. I had read the doc pages for FlatList and VirtualizedList several times, but I think they could have made this clearer. I am thinking of submitting a pull request to make it more obvious. We'll see whether it gets accepted, or whether they tell me I'm just a dunce and should have read more carefully!
Now things worked much better, but I found some more properties that help optimize the end-user experience. The code now looks like this:
<FlatList data={films} renderItem={renderItem} keyExtractor={item => item.imdbID} ListHeaderComponent={<Header totalResults={totalResults} />} ListFooterComponent={<Footer />} onEndReached={() => dispatchFetchPage()} initialNumToRender={8} maxToRenderPerBatch={2} onEndReachedThreshold={0.5} />
The purpose of initialNumToRender and maxToRenderPerBatch should be obvious. The value of 0.5 for onEndReachedThreshold means that when the last item in the data is within half a screenful of becoming visible, the app should fetch more items. In other words, the app can show about eight items at a time on my phone, so when there are four items left to scroll, the app should start fetching another page. There is a trade-off between fetching far enough ahead to keep things moving as the user scrolls, and not fetching so much that the app is overburdened and appears to ignore UI events like presses while it is rendering new items.
There is still a slight lag between when the app renders the first page of results and when items respond to presses, but I think it is now less than a second. There is no search button; the app begins requesting titles 300ms after the user pauses or stops typing. Therefore, a lag of less than a second before the UI responds to screen presses should not be an issue for all but those with very fast fingers. I did not set any timers to measure the actual lag, because I'm more concerned with the perception of UI responsiveness than with the actual value in milliseconds. If it feels slow, then it is slow.
Subsequent fetches do not seem to produce any lag at all, so I think I achieved my goal of providing a smooth UX/UI experience on the Search Results page.
The FlatList component was announced in March of 2017, so it has been around for a little over a year at the time of this writing. It is a very powerful component for rendering large lists of items efficiently, but it's also complicated. It takes all the same properties as VirtualizedList and ScrollView combined, plus a dozen or so of its own, so it probably accepts 50-60 properties in all. It takes some experience and knowledge to use it effectively for a smooth UI experience.
In particular, if its data property comes from the Redux store, you want to be sure not to replace that list each time you have more items to append to it, because doing so will defeat the FlatList's ability to render only new items in the React virtual DOM. Instead, as the FlatList and VirtualizedList documentation suggests, make sure that the list items are immutable and that the list-item component extends React.PureComponent, which is a contract promising that the rendered state of a component depends only on a shallow comparison of its properties.
I took an online course several years ago in which I was guided through the task of writing a small app with the Android SDK, but I was hoping to never have to read or write Java again. I also wasn't thrilled about having to write the app twice, once for Android, and again for iOS.
Then I learned that React Native allows one to write an app that will run on both iOS and Android from a single code-base. (You can still write custom code for each platform where necessary, if you have to.) So I could at least write the mobile part in JavaScript with React Native and save myself from having to write the same app twice, once for the Android SDK and again for the iOS SDK.
Android and iOS mobile devices don't run JavaScript in a browser-like environment with event-handling and rendering APIs. Android doesn't even have a JavaScript interpreter installed by default. In order to pull this off, a React Native app first has to bootstrap code that will run the JavaScript part in one thread and run the native rendering and event-processing code, "the Root View", in a separate thread called the UI thread. There is also a separate thread for running custom native code. Nicalaas Couvrat wrote an excellent in-depth article on what exactly goes on when you start a React Native app. This article goes into even more detail about native modules, threads, and the bridge. The takeaway is that there is some heavy lifting going on behind the scenes before a React Native app can even render "Hello World".
It would be a daunting task to configure this kind of build from scratch. Fortunately, Facebook published the open-source create-react-native-app (CRNA) project that configures a boiler-plate app for you. You install the project globally with npm or yarn, then run “create-react-native-app” from your shell, answer a few questions. When it’s finished, you have a minimal app with one screen with a few lines of text to remind you that you might have more work to do.
CRNA uses the Expo API to allow your app to run on both Android and iOS. First you download the Expo client from the Google Play store or Apple store and install it on your device. Next, you start up a packaging server on your desktop by running “yarn start” or “npm run start”. The packaging server prints a QR code on the console along with the local URI of your app’s manifest. In my case, that looked like http://192.168.1.x:19000. Finally, you load your app into Expo by pointing it at the QR code on your screen. (Expo needs permission to use your camera to do that. Curiously, I couldn’t find a way to type in the URL directly.)
On my Nexus tablet, live reloading worked great. But on my Moto Z2 Play, even if I backed out of my app to the Expo client and clicked on the link again, my app just opened where it was on the last screen without reloading. I had to keep shaking my phone and hitting the "Reload" item on the developer menu.
After I ran this command, the script reported that my app is available at https://expo.io/@lsiden/omdb-film-browser. Now anyone can go to that URL to download it and install it in their Expo client. You can choose whether you want your project to be listed or unlisted. Even if you make it unlisted, anyone with the URL can access it.
OK, so you got your app to look good and work in the Expo client. Your app is still on training wheels. You want people to be able to find and install your app directly from the Google Play Store without having to install Expo first. When you created your app with 'create-react-native-app', it installed the package.json tasks "build:android" and "build:ios". These build your APK or iOS binary on the Expo server. It takes time, so get a cup of coffee. When it's done, the script prints the URI of your APK or iOS package, which you can then download to your desktop.
If you don't save this link or download it right away, you can still find it by going to https://expo.io/builds. You may have to sign in again. You will see a history of all your builds and a link to download or view the log of each build. However, don't expect to find a link to this page anywhere on the Expo site. I couldn't find one.
Once I had the APK, I put it in my Dropbox account, downloaded it to my phone, found it with my file-manager app, and installed it. To do this, you'll have to give your OS permission to install apps from "unknown" sources, meaning sources other than the Google Play Store. Once installed, it worked exactly as it had in the Expo client, only it loaded faster.
I don’t own an iPhone so I’ll have to leave testing on iOS to a later date.
So now I have an APK and have verified that it works on my phone, but I want it tested on more devices before I announce it on my social media feeds. Also, I don't expect my app's users to download the APK and sideload it. Fortunately, the Play Store makes this easy with a beta-testing program. First you sign onto Google and go to the Google Play Store app-publishing pages.
I found the Play Store app-publishing section confusing at first. After filling out several forms, I found the page to upload my APK. Once I got it uploaded, it did not say anywhere that I would have to wait for my app to be reviewed before it could be published even in beta, nor how I would be contacted once it had been approved.
I had to resort to a live chat with the online help desk. The rep told me that I had to go to the Settings section and opt in to be notified. I would not have expected to have to ask to be notified when my app gets approved. When I search my mailbox now, I can't find any notification email, so I think I had to go back to the Play Store to check. Fortunately, it did not take that long.
Now I had to look for the link I could distribute to beta-testers. To find it, I had to go into the Release Management tab, click on Manage Beta, then open the Manage testers folder. There it was, at the bottom of the section, labeled "Opt-in URL".
It turns out that unless your app prints money, people aren't going to flock to beta-test it for you. I searched for "beta-test Android apps" and found a sub-Reddit titled AndroidAppTesters where people post links to apps they would like to have tested, but it doesn't seem to be very active.
Google has an interesting service called Firebase, where you can upload your APK and have it tested automatically on an array of devices for a reasonable fee. It says you can test on up to five real devices and ten virtual devices for free, but I could only get the test to work when I selected one device. I got back a few screenshots with a report that nothing crashed. To get past my home page, I would have to load my app into Android Studio and create a testing script to upload with my APK, so that the test engine would type into the search bar and click on a result. While I'm sure this would be highly useful for a commercial app, my app is really a demo, so I prefer to have feedback from human beings who volunteer to load and run it. I may have to post a small bounty to find a reasonable number of volunteers.
That’s as far as I’ve gotten so far. Here are a few screen shots:
In the next few days, I plan to publish some more posts that describe some of the issues I encountered while building this app, and how I overcame them. In particular, I plan to cover these topics:
const path = require('path')
const fs = require('fs-extra')
const { exec } = require('child_process')

const ghPagesPath = './gh-pages'

// Turn a Node-style callback function into one that returns a Promise.
function toPromise(func) {
  return function () {
    const args = [].slice.call(arguments)
    return new Promise(function (resolve, reject) {
      args.push(function (err) {
        err ? reject(err) : resolve()
      })
      func.apply(null, args)
    })
  }
}

;(function (dest) {
  fs.remove(dest)
    .then(function () {
      return fs.mkdir(dest)
    })
    .then(function () {
      return Promise.all([
        fs.copy('dist', path.join(dest, 'dist')),
        fs.copy('./demo.html', path.join(dest, 'index.html')),
        toPromise(exec)('yarn babel demo.jsx -o ' + path.join(dest, 'index.js')),
      ])
    })
    .catch(function (err) {
      console.error(err)
    })
})(ghPagesPath)
Whereas plugins work at the bundle or chunk level and usually run at the end of the bundle-generation process. Some plugins, like CommonsChunkPlugin, go even further and modify how the bundles themselves are created. Then there are resolvers, which deal with actually resolving the file paths or URIs to resources.
"scripts": {
"test": "NODE_ENV=test jest --verbose",
"test:watch": "yarn test -- --watch",
...
}