
* first action! (#1)

shimataro authored 2019-09-18 20:39:54 +09:00, committed by GitHub
parent 8deacc95b1
commit ace1e6a69a
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
3750 changed files with 1155519 additions and 0 deletions

node_modules/minipass/LICENSE (generated vendored normal file, 15 lines)

@@ -0,0 +1,15 @@
The ISC License
Copyright (c) npm, Inc. and Contributors
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR
IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

node_modules/minipass/README.md (generated vendored normal file, 205 lines)

@@ -0,0 +1,205 @@
# minipass
A _very_ minimal implementation of a [PassThrough
stream](https://nodejs.org/api/stream.html#stream_class_stream_passthrough).

[It's very
fast](https://docs.google.com/spreadsheets/d/1oObKSrVwLX_7Ut4Z6g3fZW-AX1j1-k6w-cDsrkaSbHM/edit#gid=0)
for objects, strings, and buffers.
Supports pipe()ing (including multi-pipe() and backpressure
transmission), buffering data until either a `data` event handler or
`pipe()` is added (so you don't lose the first chunk), and most other
cases where PassThrough is a good idea.
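For instance, a quick sketch of that buffer-until-consumer behavior:

```js
const MiniPass = require('minipass')
const mp = new MiniPass()

// no 'data' listener is attached yet, so this chunk is buffered, not lost
mp.write('first chunk')

// attaching a listener drains the buffer, so it still sees the first chunk
mp.on('data', chunk => console.log(chunk.toString())) // "first chunk"
```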
There is a `read()` method, but it's much more efficient to consume
data from this stream via `'data'` events or by calling `pipe()` into
some other stream. Calling `read()` requires the buffer to be
flattened in some cases, which requires copying memory.
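For example, a small sketch of `read()` flattening buffered string chunks:

```js
const MiniPass = require('minipass')
const mp = new MiniPass({ encoding: 'utf8' })

mp.write('foo')
mp.write('bar')

// read() must join the two buffered chunks into one, copying memory
console.log(mp.read()) // "foobar"
```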
There is also no `unpipe()` method. Once you start piping, there is
no stopping it!
If you set `objectMode: true` in the options, then whatever is written
will be emitted. Otherwise, it'll do a minimal amount of Buffer
copying to ensure proper Streams semantics when `read(n)` is called.
This is not a `through` or `through2` stream. It doesn't transform
the data, it just passes it right through. If you want to transform
the data, extend the class, and override the `write()` method. Once
you're done transforming the data however you want, call
`super.write()` with the transform output.
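As an illustrative sketch (the `Upper` class here is hypothetical), a transform built this way might look like:

```js
const MiniPass = require('minipass')

// hypothetical subclass: upper-cases string chunks before passing them on
class Upper extends MiniPass {
  write (chunk, encoding, cb) {
    return super.write(chunk.toString().toUpperCase(), encoding, cb)
  }
}

const up = new Upper({ encoding: 'utf8' })
up.on('data', c => console.log(c)) // "HELLO"
up.write('hello')
```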
For some examples of streams that extend MiniPass in various ways, check
out:
- [minizlib](http://npm.im/minizlib)
- [fs-minipass](http://npm.im/fs-minipass)
- [tar](http://npm.im/tar)
- [minipass-collect](http://npm.im/minipass-collect)
- [minipass-flush](http://npm.im/minipass-flush)
- [minipass-pipeline](http://npm.im/minipass-pipeline)
- [tap](http://npm.im/tap)
- [tap-parser](http://npm.im/tap-parser)
- [treport](http://npm.im/treport)
## Differences from Node.js Streams
There are several things that make Minipass streams different from (and in
some ways superior to) Node.js core streams.
### Timing
Minipass streams are designed to support synchronous use-cases. Thus, data
is emitted as soon as it is available, always. It is buffered until read,
but no longer. Another way to look at it is that Minipass streams are
exactly as synchronous as the logic that writes into them.
This can be surprising if your code relies on `PassThrough.write()` always
providing data on the next tick rather than the current one, or being able
to call `resume()` and not have the entire buffer disappear immediately.
However, without this synchronicity guarantee, there would be no way for
Minipass to achieve the speeds it does, or support the synchronous use
cases that it does. Simply put, waiting takes time.
This non-deferring approach makes Minipass streams much easier to reason
about, especially in the context of Promises and other flow-control
mechanisms.
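To illustrate, a small sketch of what that means in practice:

```js
const MiniPass = require('minipass')
const mp = new MiniPass({ encoding: 'utf8' })

let seen = null
mp.on('data', c => { seen = c })

mp.write('sync')
// the 'data' handler already ran, synchronously, inside write()
console.log(seen) // "sync", not null: no next-tick deferral
```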
### No High/Low Water Marks
Node.js core streams will optimistically fill up a buffer, returning `true`
on all writes until the limit is hit, even if the data has nowhere to go.
Then, they will not attempt to draw more data in until the buffer size dips
below a minimum value.
Minipass streams are much simpler. The `write()` method will return `true`
if the data has somewhere to go (which is to say, given the timing
guarantees, that the data is already there by the time `write()` returns).
If the data has nowhere to go, then `write()` returns false, and the data
sits in a buffer, to be drained out immediately as soon as anyone consumes
it.
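A quick sketch of those `write()` return values:

```js
const MiniPass = require('minipass')
const mp = new MiniPass()

// no consumer yet: the chunk sits in the buffer and write() reports false
console.log(mp.write('a')) // false

mp.on('data', () => {}) // attaching a consumer drains the buffer

// now the data has somewhere to go, so write() reports true
console.log(mp.write('b')) // true
```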
### Emit `end` When Asked
If you do `stream.on('end', someFunction)`, and the stream has already
emitted `end`, then it will emit it again.
To prevent handlers that do not expect multiple `end` events from being
called more than once, all listeners are removed from the `'end'` event
whenever it is emitted.
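For example, a sketch of that re-emit-and-cleanup behavior:

```js
const MiniPass = require('minipass')
const mp = new MiniPass()

mp.end() // nothing was written, so 'end' is emitted immediately

// a listener attached after the fact is still called once...
mp.on('end', () => console.log('done')) // "done"

// ...and then removed, so it can never fire a second time
console.log(mp.listenerCount('end')) // 0
```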
## USAGE
```js
const MiniPass = require('minipass')
const mp = new MiniPass(options) // optional: { encoding }
mp.write('foo')
mp.pipe(someOtherStream)
mp.end('bar')
```
### simple "are you done yet" promise
```js
mp.promise().then(() => {
// stream is finished
}, er => {
// stream emitted an error
})
```
### collecting
```js
mp.collect().then(all => {
// all is an array of all the data emitted
// encoding is supported in this case,
// so the result will be a collection of strings if
// an encoding is specified, or buffers/objects if not.
//
// In an async function, you may do
// const data = await stream.collect()
})
```
### collecting into a single blob
This is a bit slower because it concatenates the data into one chunk for
you, but if you're going to do it yourself anyway, it's convenient this
way:
```js
mp.concat().then(onebigchunk => {
// onebigchunk is a string if the stream
// had an encoding set, or a buffer otherwise.
})
```
### iteration
You can iterate over streams synchronously or asynchronously on
platforms that support it.
Synchronous iteration will end when the currently available data is
consumed, even if the `end` event has not been reached. In string and
buffer mode, the data is concatenated, so unless multiple writes are
occurring in the same tick as the `read()`, sync iteration loops will
generally only have a single iteration.
To consume chunks in this way exactly as they have been written, with
no flattening, create the stream with the `{ objectMode: true }`
option.
```js
const mp = new MiniPass({ objectMode: true })
mp.write('a')
mp.write('b')
for (let letter of mp) {
console.log(letter) // a, b
}
mp.write('c')
mp.write('d')
for (let letter of mp) {
console.log(letter) // c, d
}
mp.write('e')
mp.end()
for (let letter of mp) {
console.log(letter) // e
}
for (let letter of mp) {
console.log(letter) // nothing
}
```
Asynchronous iteration will continue until the end event is reached,
consuming all of the data.
```js
const mp = new MiniPass({ encoding: 'utf8' })
// some source of some data
let i = 5
const inter = setInterval(() => {
if (i --> 0)
mp.write(Buffer.from('foo\n', 'utf8'))
else {
mp.end()
clearInterval(inter)
}
}, 100)
// consume the data with asynchronous iteration
async function consume () {
for await (let chunk of mp) {
console.log(chunk)
}
return 'ok'
}
consume().then(res => console.log(res))
// logs `foo\n` 5 times, and then `ok`
```

node_modules/minipass/index.js (generated vendored normal file, 424 lines)

@@ -0,0 +1,424 @@
'use strict'
const EE = require('events')
const Yallist = require('yallist')
const EOF = Symbol('EOF')
const MAYBE_EMIT_END = Symbol('maybeEmitEnd')
const EMITTED_END = Symbol('emittedEnd')
const EMITTING_END = Symbol('emittingEnd')
const CLOSED = Symbol('closed')
const READ = Symbol('read')
const FLUSH = Symbol('flush')
const doIter = process.env._MP_NO_ITERATOR_SYMBOLS_ !== '1'
const ASYNCITERATOR = doIter && Symbol.asyncIterator || Symbol('asyncIterator not implemented')
const ITERATOR = doIter && Symbol.iterator || Symbol('iterator not implemented')
const FLUSHCHUNK = Symbol('flushChunk')
const SD = require('string_decoder').StringDecoder
const ENCODING = Symbol('encoding')
const DECODER = Symbol('decoder')
const FLOWING = Symbol('flowing')
const PAUSED = Symbol('paused')
const PAUSE_TRUE = Symbol('pauseTrue')
const PAUSE_FALSE = Symbol('pauseFalse')
const RESUME = Symbol('resume')
const BUFFERLENGTH = Symbol('bufferLength')
const BUFFERPUSH = Symbol('bufferPush')
const BUFFERSHIFT = Symbol('bufferShift')
const OBJECTMODE = Symbol('objectMode')
// Buffer in node 4.x < 4.5.0 doesn't have working Buffer.from
// or Buffer.alloc, and Buffer in node 10 deprecated the ctor.
// .M, this is fine .\^/M..
let B = Buffer
/* istanbul ignore next */
if (!B.alloc) {
B = require('safe-buffer').Buffer
}
// events that mean 'the stream is over'
// these are treated specially, and re-emitted
// if they are listened for after emitting.
const isEndish = ev =>
ev === 'end' ||
ev === 'finish' ||
ev === 'prefinish'
module.exports = class MiniPass extends EE {
constructor (options) {
super()
this[FLOWING] = false
// whether we're explicitly paused
this[PAUSED] = PAUSE_FALSE
this.pipes = new Yallist()
this.buffer = new Yallist()
this[OBJECTMODE] = options && options.objectMode || false
if (this[OBJECTMODE])
this[ENCODING] = null
else
this[ENCODING] = options && options.encoding || null
if (this[ENCODING] === 'buffer')
this[ENCODING] = null
this[DECODER] = this[ENCODING] ? new SD(this[ENCODING]) : null
this[EOF] = false
this[EMITTED_END] = false
this[EMITTING_END] = false
this[CLOSED] = false
this.writable = true
this.readable = true
this[BUFFERLENGTH] = 0
}
get bufferLength () { return this[BUFFERLENGTH] }
get encoding () { return this[ENCODING] }
set encoding (enc) {
if (this[OBJECTMODE])
throw new Error('cannot set encoding in objectMode')
if (this[ENCODING] && enc !== this[ENCODING] &&
(this[DECODER] && this[DECODER].lastNeed || this[BUFFERLENGTH]))
throw new Error('cannot change encoding')
if (this[ENCODING] !== enc) {
this[DECODER] = enc ? new SD(enc) : null
if (this.buffer.length)
this.buffer = this.buffer.map(chunk => this[DECODER].write(chunk))
}
this[ENCODING] = enc
}
setEncoding (enc) {
this.encoding = enc
}
write (chunk, encoding, cb) {
if (this[EOF])
throw new Error('write after end')
if (typeof encoding === 'function')
cb = encoding, encoding = 'utf8'
if (!encoding)
encoding = 'utf8'
// fast-path writing strings of same encoding to a stream with
// an empty buffer, skipping the buffer/decoder dance
if (typeof chunk === 'string' && !this[OBJECTMODE] &&
// unless it is a string already ready for us to use
!(encoding === this[ENCODING] && !this[DECODER].lastNeed)) {
chunk = B.from(chunk, encoding)
}
if (B.isBuffer(chunk) && this[ENCODING])
chunk = this[DECODER].write(chunk)
try {
return this.flowing
? (this.emit('data', chunk), this.flowing)
: (this[BUFFERPUSH](chunk), false)
} finally {
this.emit('readable')
if (cb)
cb()
}
}
read (n) {
try {
if (this[BUFFERLENGTH] === 0 || n === 0 || n > this[BUFFERLENGTH])
return null
if (this[OBJECTMODE])
n = null
if (this.buffer.length > 1 && !this[OBJECTMODE]) {
if (this.encoding)
this.buffer = new Yallist([
Array.from(this.buffer).join('')
])
else
this.buffer = new Yallist([
B.concat(Array.from(this.buffer), this[BUFFERLENGTH])
])
}
return this[READ](n || null, this.buffer.head.value)
} finally {
this[MAYBE_EMIT_END]()
}
}
[READ] (n, chunk) {
if (n === chunk.length || n === null)
this[BUFFERSHIFT]()
else {
this.buffer.head.value = chunk.slice(n)
chunk = chunk.slice(0, n)
this[BUFFERLENGTH] -= n
}
this.emit('data', chunk)
if (!this.buffer.length && !this[EOF])
this.emit('drain')
return chunk
}
end (chunk, encoding, cb) {
if (typeof chunk === 'function')
cb = chunk, chunk = null
if (typeof encoding === 'function')
cb = encoding, encoding = 'utf8'
if (chunk)
this.write(chunk, encoding)
if (cb)
this.once('end', cb)
this[EOF] = true
this.writable = false
// if we haven't written anything, then go ahead and emit,
// even if we're not reading.
// we'll re-emit if a new 'end' listener is added anyway.
// This makes MP more suitable to write-only use cases.
if (this.flowing || this[PAUSED] !== PAUSE_TRUE)
this[MAYBE_EMIT_END]()
return this
}
// don't let the internal resume be overwritten
[RESUME] () {
this[PAUSED] = PAUSE_FALSE
this[FLOWING] = true
this.emit('resume')
if (this.buffer.length)
this[FLUSH]()
else if (this[EOF])
this[MAYBE_EMIT_END]()
else
this.emit('drain')
}
resume () {
return this[RESUME]()
}
pause () {
this[FLOWING] = false
this[PAUSED] = PAUSE_TRUE
}
get flowing () {
return this[FLOWING]
}
[BUFFERPUSH] (chunk) {
if (this[OBJECTMODE])
this[BUFFERLENGTH] += 1
else
this[BUFFERLENGTH] += chunk.length
return this.buffer.push(chunk)
}
[BUFFERSHIFT] () {
if (this.buffer.length) {
if (this[OBJECTMODE])
this[BUFFERLENGTH] -= 1
else
this[BUFFERLENGTH] -= this.buffer.head.value.length
}
return this.buffer.shift()
}
[FLUSH] () {
do {} while (this[FLUSHCHUNK](this[BUFFERSHIFT]()))
if (!this.buffer.length && !this[EOF])
this.emit('drain')
}
[FLUSHCHUNK] (chunk) {
return chunk ? (this.emit('data', chunk), this.flowing) : false
}
pipe (dest, opts) {
const ended = this[EMITTED_END]
opts = opts || {}
if (dest === process.stdout || dest === process.stderr)
opts.end = false
else
opts.end = opts.end !== false
const p = { dest: dest, opts: opts, ondrain: _ => this[RESUME]() }
this.pipes.push(p)
dest.on('drain', p.ondrain)
this[RESUME]()
// piping an ended stream ends immediately
if (ended && p.opts.end)
p.dest.end()
return dest
}
addListener (ev, fn) {
return this.on(ev, fn)
}
on (ev, fn) {
try {
return super.on(ev, fn)
} finally {
if (ev === 'data' && !this.pipes.length && !this.flowing)
this[RESUME]()
else if (isEndish(ev) && this[EMITTED_END]) {
super.emit(ev)
this.removeAllListeners(ev)
}
}
}
get emittedEnd () {
return this[EMITTED_END]
}
[MAYBE_EMIT_END] () {
if (!this[EMITTING_END] &&
!this[EMITTED_END] &&
this.buffer.length === 0 &&
this[EOF]) {
this[EMITTING_END] = true
this.emit('end')
this.emit('prefinish')
this.emit('finish')
if (this[CLOSED])
this.emit('close')
this[EMITTING_END] = false
}
}
emit (ev, data) {
if (ev === 'data') {
if (!data)
return
if (this.pipes.length)
this.pipes.forEach(p =>
p.dest.write(data) === false && this.pause())
} else if (ev === 'end') {
// only actual end gets this treatment
if (this[EMITTED_END] === true)
return
this[EMITTED_END] = true
this.readable = false
if (this[DECODER]) {
data = this[DECODER].end()
if (data) {
this.pipes.forEach(p => p.dest.write(data))
super.emit('data', data)
}
}
this.pipes.forEach(p => {
p.dest.removeListener('drain', p.ondrain)
if (p.opts.end)
p.dest.end()
})
} else if (ev === 'close') {
this[CLOSED] = true
// don't emit close before 'end' and 'finish'
if (!this[EMITTED_END])
return
}
const args = new Array(arguments.length)
args[0] = ev
args[1] = data
if (arguments.length > 2) {
for (let i = 2; i < arguments.length; i++) {
args[i] = arguments[i]
}
}
try {
return super.emit.apply(this, args)
} finally {
if (!isEndish(ev))
this[MAYBE_EMIT_END]()
else
this.removeAllListeners(ev)
}
}
// const all = await stream.collect()
collect () {
const buf = []
this.on('data', c => buf.push(c))
return this.promise().then(() => buf)
}
// const data = await stream.concat()
concat () {
return this.collect().then(chunks =>
this[ENCODING] ? chunks.join('') : Buffer.concat(chunks))
}
// stream.promise().then(() => done, er => emitted error)
promise () {
return new Promise((resolve, reject) => {
this.on('end', () => resolve())
this.on('error', er => reject(er))
})
}
// for await (let chunk of stream)
[ASYNCITERATOR] () {
const next = () => {
const res = this.read()
if (res !== null)
return Promise.resolve({ done: false, value: res })
if (this[EOF])
return Promise.resolve({ done: true })
let resolve = null
let reject = null
const onerr = er => {
this.removeListener('data', ondata)
this.removeListener('end', onend)
reject(er)
}
const ondata = value => {
this.removeListener('error', onerr)
this.removeListener('end', onend)
this.pause()
resolve({ value: value, done: !!this[EOF] })
}
const onend = () => {
this.removeListener('error', onerr)
this.removeListener('data', ondata)
resolve({ done: true })
}
return new Promise((res, rej) => {
reject = rej
resolve = res
this.once('error', onerr)
this.once('end', onend)
this.once('data', ondata)
})
}
return { next }
}
// for (let chunk of stream)
[ITERATOR] () {
const next = () => {
const value = this.read()
const done = value === null
return { value, done }
}
return { next }
}
}

node_modules/minipass/package.json (generated vendored normal file, 70 lines)

@@ -0,0 +1,70 @@
{
"_from": "minipass@^2.3.5",
"_id": "minipass@2.6.5",
"_inBundle": false,
"_integrity": "sha512-ewSKOPFH9blOLXx0YSE+mbrNMBFPS+11a2b03QZ+P4LVrUHW/GAlqeYC7DBknDyMWkHzrzTpDhUvy7MUxqyrPA==",
"_location": "/minipass",
"_phantomChildren": {},
"_requested": {
"type": "range",
"registry": true,
"raw": "minipass@^2.3.5",
"name": "minipass",
"escapedName": "minipass",
"rawSpec": "^2.3.5",
"saveSpec": null,
"fetchSpec": "^2.3.5"
},
"_requiredBy": [
"/fs-minipass",
"/minizlib",
"/pacote",
"/tar"
],
"_resolved": "https://registry.npmjs.org/minipass/-/minipass-2.6.5.tgz",
"_shasum": "1c245f9f2897f70fd4a219066261ce6c29f80b18",
"_spec": "minipass@^2.3.5",
"_where": "/home/shimataro/projects/actions/ssh-key-action/node_modules/pacote",
"author": {
"name": "Isaac Z. Schlueter",
"email": "i@izs.me",
"url": "http://blog.izs.me/"
},
"bugs": {
"url": "https://github.com/isaacs/minipass/issues"
},
"bundleDependencies": false,
"dependencies": {
"safe-buffer": "^5.1.2",
"yallist": "^3.0.0"
},
"deprecated": false,
"description": "minimal implementation of a PassThrough stream",
"devDependencies": {
"end-of-stream": "^1.4.0",
"tap": "^14.6.1",
"through2": "^2.0.3"
},
"files": [
"index.js"
],
"homepage": "https://github.com/isaacs/minipass#readme",
"keywords": [
"passthrough",
"stream"
],
"license": "ISC",
"main": "index.js",
"name": "minipass",
"repository": {
"type": "git",
"url": "git+https://github.com/isaacs/minipass.git"
},
"scripts": {
"postpublish": "git push origin --follow-tags",
"postversion": "npm publish",
"preversion": "npm test",
"test": "tap test/*.js --100"
},
"version": "2.6.5"
}