ZuInnoTe / hadoopcryptoledger

Hadoop Crypto Ledger - Analyzing CryptoLedgers, such as Bitcoin Blockchain, on Big Data platforms, such as Hadoop/Spark/Flink/Hive
Apache License 2.0

Negative gasPrice and gasLimit #46

Closed sanchopansa closed 6 years ago

sanchopansa commented 6 years ago

Hello and thank you for creating this very useful open source project!

I started using it yesterday and ran into a couple of issues; I'm hoping you can point me in the right direction. I'm including an example app below for reproducing the problems.

My setup:

- Ubuntu 16.04 (64-bit)
- Scala 2.11.7
- Spark 2.2.1 (Hadoop 2.7) running in local mode
- Geth 1.7.3-stable

My Geth node is synced to the mainnet and, following the advice here, I created multiple files holding the exported blockchain (up to block number 5M).

Here's a snippet of the directory listing; the file names use a slightly different nomenclature than the example, but the files were still produced with geth export:

➜  eth-blockchain ls -lah
total 25G
drwxrwxr-x  2 sancho sancho 4.0K Feb 13 16:22 .
drwxr-xr-x 43 sancho sancho 4.0K Feb 13 16:22 ..
-rwxrwxr-x  1 sancho sancho 126M Feb 13 03:01 blocks-0-200000
-rwxrwxr-x  1 sancho sancho 240M Feb 13 03:32 blocks-1000001-1200000
-rwxrwxr-x  1 sancho sancho 274M Feb 13 03:33 blocks-1200001-1400000
-rwxrwxr-x  1 sancho sancho 299M Feb 13 03:33 blocks-1400001-1600000
-rwxrwxr-x  1 sancho sancho 307M Feb 13 03:40 blocks-1600001-1800000
-rwxrwxr-x  1 sancho sancho 290M Feb 13 03:40 blocks-1800001-2000000
-rwxrwxr-x  1 sancho sancho 301M Feb 13 03:41 blocks-2000001-2200000
-rwxrwxr-x  1 sancho sancho 148M Feb 13 03:02 blocks-200001-400000
-rwxrwxr-x  1 sancho sancho 332M Feb 13 03:41 blocks-2200001-2400000
...

The code for the app I'm using:

package analytics

import collection.JavaConverters._
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.BytesWritable
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.zuinnote.hadoop.ethereum.format.common.EthereumBlock
import org.zuinnote.hadoop.ethereum.format.mapreduce.EthereumBlockFileInputFormat

object TestApp {

  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf()
      .setAppName("test-app")
      .setMaster("local[*]")

    val spark = SparkSession.builder
      .config(sparkConf)
      .getOrCreate()

    val hadoopConf = new Configuration()
    hadoopConf.set("hadoopcryptoledeger.ethereumblockinputformat.usedirectbuffer", "false")

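    // Read every exported chain file in the directory as (BytesWritable, EthereumBlock) records.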
    val rdd = spark.sparkContext.newAPIHadoopFile(
      "/home/sancho/eth-blockchain",
      classOf[EthereumBlockFileInputFormat], classOf[BytesWritable], classOf[EthereumBlock], hadoopConf
    )

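    // Count transactions whose parsed gas price comes out negative (on mainnet there should be none).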
    println("Number of transactions with negative gas price: " + rdd
      .flatMap(_._2.getEthereumTransactions.asScala)
      .filter(_.getGasPrice < 0)
      .count()
    )

    println("Number of transactions with negative gas limit: " + rdd
      .flatMap(_._2.getEthereumTransactions.asScala)
      .filter(_.getGasLimit < 0)
      .count()
    )

    val blockNumber = 4800251

    println(s"Number of transactions with negative gas price in block $blockNumber: " + rdd
        .filter(_._2.getEthereumBlockHeader.getNumber == blockNumber)
        .flatMap(_._2.getEthereumTransactions.asScala)
        .filter(_.getGasPrice < 0)
        .count()
    )

    println(s"Number of transactions with negative gas limit in block $blockNumber: " + rdd
      .filter(_._2.getEthereumBlockHeader.getNumber == blockNumber)
      .flatMap(_._2.getEthereumTransactions.asScala)
      .filter(_.getGasLimit < 0)
      .count()
    )
  }
}

This is the build.sbt file:

lazy val commonSettings = Seq(
  scalaVersion := "2.11.7",
  test in assembly := {}
)

lazy val ethBlockchainAnalytics = (project in file(".")).
  settings(commonSettings).
  settings(
    name := "EthBlockchainAnalytics",
    version := "0.1",
    libraryDependencies ++= Seq(
      "com.github.zuinnote" %% "spark-hadoopcryptoledger-ds" % "1.1.2",
      "org.apache.spark" %% "spark-core" % "2.2.1" % "provided",
      "org.apache.spark" %% "spark-sql" % "2.2.1" % "provided"),
    assemblyJarName in assembly := s"${name.value}_${scalaBinaryVersion.value}-${version.value}.jar",
    assemblyMergeStrategy in assembly := {
      case PathList("META-INF", xs@_*) => MergeStrategy.discard
      case PathList("javax", "servlet", xs@_*) => MergeStrategy.last
      case PathList("org", "apache", xs@_*) => MergeStrategy.last
      case x =>
        val oldStrategy = (assemblyMergeStrategy in assembly).value
        oldStrategy(x)
    }
  )

The launcher script I'm using:

#!/bin/sh

JAR=$1

/usr/local/lib/spark/bin/spark-submit \
    --class analytics.TestApp \
    --driver-memory 20G \
    "$JAR"

And finally the command I'm using to run it:

➜  EthBlockchainAnalytics src/main/resources/launcher.sh /home/sancho/IdeaProjects/EthBlockchainAnalytics/target/scala-2.11/EthBlockchainAnalytics_2.11-0.1.jar

The output of the above application when run like this is:

Number of transactions with negative gas price: 8732263
Number of transactions with negative gas limit: 25699923
Number of transactions with negative gas price in block 4800251: 2
Number of transactions with negative gas limit in block 4800251: 8

As a quick sanity check, I ran the following:

➜  ~ geth attach
Welcome to the Geth JavaScript console!

instance: Geth/v1.7.3-stable-4bb3c89d/linux-amd64/go1.9
 modules: admin:1.0 debug:1.0 eth:1.0 miner:1.0 net:1.0 personal:1.0 rpc:1.0 txpool:1.0 web3:1.0

> var txs = eth.getBlock(4800251).transactions
undefined
> for (var i=0; i<txs.length; i++) { if (eth.getTransaction(txs[i]).gasPrice < 0) console.log(txs[i]) }
undefined

Any idea why I'm seeing so many negative gas prices and gas limits when using Spark?

jornfranke commented 6 years ago

Thx a lot for the details. I will look into it later with the same block. Have you checked how many transactions it finds with a positive gas price?

Gas price is part of the unit tests and I am not aware that the format has changed.

Your geth console commands return undefined, so I'm not sure they are correct.


jornfranke commented 6 years ago

I think it is a bug in the library. We convert the values to a Java datatype, but Java's numeric types are signed, whereas in Ethereum they are unsigned (a concept Java's primitives do not have). I will prepare a test case to verify this and, if confirmed, publish a fix this week.
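To illustrate what I suspect, here is a minimal sketch (with made-up bytes, not the library's actual parsing code): a big-endian byte sequence whose high bit is set turns negative when interpreted as a signed two's-complement value, while java.math.BigInteger's sign-magnitude constructor recovers the unsigned value the Ethereum encoding intends.

import java.math.BigInteger

object SignednessDemo {
  def main(args: Array[String]): Unit = {
    // Illustrative big-endian bytes of a gas price whose high bit is set.
    val raw = Array(0x80.toByte, 0x00.toByte)

    // Two's-complement interpretation (what a signed Java type sees): negative.
    println(new BigInteger(raw))    // prints -32768

    // Forcing a non-negative sign recovers the intended unsigned value.
    println(new BigInteger(1, raw)) // prints 32768
  }
}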

Thx for reporting.


sanchopansa commented 6 years ago

Thank you for the quick reply!

To answer your questions from above:

The geth console returns undefined because the output is empty (i.e. there are no transactions with negative gasPrice); here is a small portion of the output for txs with positive gasPrice:

> for (var i=0; i<txs.length; i++) { if (eth.getTransaction(txs[i]).gasPrice > 0) console.log(txs[i]) }
0xfb0e802cf96bb858269a79588372f13733c70cdc146299248b926922ad49d33e
0xf602219098c75bbb165978e4533b1aa6aa9d1af0e01825173a02561ddf0235e0
0xb271dffd8a06f0d2d4db04dfdb3708c7cfa4f8797e57fc35b18507fa617ea4de
0x23716201794691777552e49d215bb04f4021207a8f2f31cbe72289f90d6492a9

And here are the details of a random tx in this block:

> eth.getTransaction('0x48a1929963e9adfe144e94151f8ad3f76c8870c7a058ea8f65ba99ed823d3c2d')
{
  blockHash: "0x41d6f0bfc69031b20fd9743c77c1e39db52460d5e2a2cdb7ee328320ddb8ab4f",
  blockNumber: 4800251,
  from: "0x934f5fc2dd08e4ce5040be9a53826b2dc715667c",
  gas: 250000,
  gasPrice: 8000000000,
  hash: "0x48a1929963e9adfe144e94151f8ad3f76c8870c7a058ea8f65ba99ed823d3c2d",
  input: "0x0a19b14a00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000140a3dcf8389400000000000000000000000000cb5a05bef3257613e984c17dbcf039952b6d883f00000000000000000000000000000000000000000000000000000017177b3bc000000000000000000000000000000000000000000000000000000000004965b2000000000000000000000000000000000000000000000000000000004dca9ea80000000000000000000000001b87c2a6058bc88548bc9bb18b0717f939b2ccb3000000000000000000000000000000000000000000000000000000000000001b9393bff9304b709e35089811e0c0974cca435bee2d4f4da19243c43304d3e25457ae43f6befc2ed9fa8cfa2bdfcf1499d1c721abea622a6b38881b4e7f7f8b6e0000000000000000000000000000000000000000000000000108cf25990cd928",
  nonce: 177,
  r: "0xad9700768f580f8dbe0903b59a3a4199f2e7b0aa9cc821a3315712121f22798b",
  s: "0x3e47da4fa5145ef9406d55492d680d97e34e56e0db3f6e0e50040d56656c9ac5",
  to: "0x8d12a197cb00d4747a1fe03395095ce2a5cc6819",
  transactionIndex: 39,
  v: "0x1c",
  value: 0
}

which matches the details for the same tx on etherscan.io exactly.

While I'm not excluding the possibility that my local geth db could be corrupted, I haven't seen any evidence that this is the case yet (I've checked quite a few txs manually since yesterday and they all seem to be OK).

Modifying the above Spark app to count txs with gas price/gas limit strictly greater than 0 yields the following output:

Number of transactions with positive gas price: 141089565                       
Number of transactions with positive gas limit: 124160283                       
Number of transactions with positive gas price in block 4800251: 67             
Number of transactions with positive gas limit in block 4800251: 61             

Combined with the counts of txs with negative gas price/gas limit in block 4800251, this gives a total of 69 txs in the block, which matches the total number of txs for this block according to etherscan.io.
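Incidentally, both tallies per field can be gathered in a single pass over the data instead of one RDD scan per predicate; a sketch reusing the rdd value from the test app above:

// Classify each transaction's gas price once and count per class.
val gasPriceTally = rdd
  .flatMap(_._2.getEthereumTransactions.asScala)
  .map { tx =>
    if (tx.getGasPrice > 0) "positive"
    else if (tx.getGasPrice < 0) "negative"
    else "zero"
  }
  .countByValue()

println(gasPriceTally)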

jornfranke commented 6 years ago

I reproduced the error in a unit test.

jornfranke commented 6 years ago

Fixed (see latest commit). As suspected, it was that Java's types have a sign, whereas Ethereum's do not. I will release the fix in the coming days.

Thank you a lot for reporting.

sanchopansa commented 6 years ago

Thank you for the quick fix! Apologies for the n00b question, but what would be the easiest way for me to use the library with the fix before you do the release?

jornfranke commented 6 years ago

https://github.com/ZuInnoTe/hadoopcryptoledger/wiki/Hadoop-File-Format#build

and then update your build.sbt to use version 1.1.3 of the hadoopcryptoledger library
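In build.sbt that would look roughly like this (a sketch, assuming you publish the locally built artifacts to your local Maven repository and that the artifact coordinates are as shown; verify both against the wiki's build page):

// Resolve artifacts published to the local Maven repository
// (e.g. after building the library from source per the wiki).
resolvers += Resolver.mavenLocal

// Pull in the locally built 1.1.3 fileformat library
// (artifact coordinates assumed; check the build output).
libraryDependencies += "com.github.zuinnote" % "hadoopcryptoledger-fileformat" % "1.1.3"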

sanchopansa commented 6 years ago

I can confirm the fix is working. Thanks!

jornfranke commented 6 years ago

Thank you for reporting and for the feedback. I will try to publish tonight. Soon we will also put some more effort into adding new features.


sanchopansa commented 6 years ago

Awesome, looking forward to it! I'll be using the library in the next few days/weeks and will get back to you in case I find some other issues.

Maybe I could also help with some testing (or development) of the new features, in case you're looking for more hands :)

jornfranke commented 6 years ago

Sure always appreciated.


jornfranke commented 6 years ago

Fixed in released version 1.1.3.

jornfranke commented 6 years ago

I fear this fix introduced another issue: the transaction hash calculation assumes a signed type, so everywhere we removed the sign the transaction hash is no longer calculated correctly.

Will have to check tonight.
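The pitfall, roughly, in a minimal sketch (illustrative bytes, not the library's actual hashing code): the transaction hash is computed over the originally encoded bytes, and BigInteger.toByteArray() prepends a 0x00 sign byte whenever the high bit is set, so re-encoding from the unsigned value no longer reproduces the original bytes, and hence not the original hash.

import java.math.BigInteger

object HashPitfallDemo {
  def main(args: Array[String]): Unit = {
    val raw = Array(0x94.toByte, 0x35.toByte) // illustrative encoded bytes
    val unsigned = new BigInteger(1, raw)

    println(raw.length)                  // 2
    // toByteArray prepends a 0x00 sign byte, so any digest computed over
    // these re-encoded bytes differs from one over the original bytes.
    println(unsigned.toByteArray.length) // 3
  }
}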
