...

In a few cases, PySpark's internal code needs to take care to avoid including unserializable objects in function closures. For example, consider this excerpt from rdd.py that wraps a user-defined function to batch its output:

Code Block
python

            # Copy self.func and self.ctx.batchSize into local variables
            # so that batched_func's closure captures them rather than
            # self, which would drag unserializable objects (such as the
            # SparkContext) into the pickled closure.
            oldfunc = self.func
            batchSize = self.ctx.batchSize
            def batched_func(split, iterator):
                return batched(oldfunc(split, iterator), batchSize)
            func = batched_func

...

Even with only one serializer, there are still some subtleties here due to how PySpark handles text files. PySpark implements SparkContext.textFile() by directly calling its Java equivalent. This produces a JavaRDD[String] instead of a JavaRDD[byte[]]. JavaRDD transfers these strings to Python workers using Java's modified UTF-8 (MUTF-8) encoding.

Wiki Markup
{footnote}Prior to this pull request, JavaRDD would send strings to Python as pickled UTF-8 strings by prepending the appropriate pickle opcodes.  From the worker's point of view, all of its incoming data was in the same pickled format.  The pull request removed all Python-pickle-specific code from JavaRDD.{footnote}
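To make the framing concrete, here is a hedged sketch of decoding one such string on the Python side, assuming the Java side frames each string the way DataOutputStream.writeUTF() does (a 2-byte big-endian length followed by the encoded bytes). The read_mutf8_string helper is hypothetical, and plain UTF-8 decoding stands in for true modified UTF-8 (the two agree for most text):

Code Block
python

import io
import struct

def read_mutf8_string(stream):
    # writeUTF() frames each string as a 2-byte big-endian length
    # followed by that many bytes of modified UTF-8. True MUTF-8
    # differs from UTF-8 only in its handling of NUL and
    # supplementary characters, so plain UTF-8 is assumed here.
    length, = struct.unpack('>H', stream.read(2))
    return stream.read(length).decode('utf-8')

# Example: decode a frame shaped like writeUTF() output.
frame = struct.pack('>H', 5) + 'hello'.encode('utf-8')
print(read_mutf8_string(io.BytesIO(frame)))  # hello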

To handle these cases, PySpark allows a stage's input deserialization and output serialization functions to come from different serializers. For example, in sc.textFile(..).map(lambda x: ...).groupByKey(), the first pipeline stage would use a MUTF8Deserializer for its input and a PickleSerializer for its output, and subsequent stages would use PickleSerializers for both their inputs and outputs. PySpark performs the bookkeeping needed to select the appropriate deserializers by consulting the lineage graph.
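As a rough illustration of that bookkeeping, each stage in the example pipeline can be described by an (input deserializer, output serializer) pair. The classes below are empty stand-ins for illustration, not PySpark's actual serializer implementations:

Code Block
python

# Stand-in serializer classes for illustration; not PySpark's code.
class MUTF8Deserializer(object):
    """Decodes MUTF-8-framed strings arriving from Java."""

class PickleSerializer(object):
    """Reads and writes pickled Python objects."""

# sc.textFile(..).map(lambda x: ...).groupByKey() as a list of
# (input deserializer, output serializer) pairs, one per stage:
stages = [
    (MUTF8Deserializer(), PickleSerializer()),  # stage 1: text in, pickles out
    (PickleSerializer(), PickleSerializer()),   # later stages: pickles in/out
]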

 

At the moment, union() requires that its inputs were serialized with the same serializer. When unioning an untransformed RDD created with sc.textFile() against a transformed RDD, a cartesian() product, or an RDD created with parallelize(), PySpark forces some of the RDDs to be re-serialized using the default serializer. We might be able to add code to avoid this re-serialization, but it would add extra complexity, and these particular union() usages seem uncommon.
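A minimal, self-contained sketch of this rule; FakeRDD, reserialized(), and the serializer labels are invented for illustration and are not PySpark's API:

Code Block
python

# A self-contained sketch of the re-serialization rule; this FakeRDD
# stand-in and its method names are illustrative, not PySpark's API.
class FakeRDD(object):
    def __init__(self, data, serializer):
        self.data = data
        self.serializer = serializer

    def reserialized(self, serializer):
        # Real code would decode with the old serializer and re-encode
        # with the new one; here we just relabel the data.
        return FakeRDD(self.data, serializer)

def union(left, right, default_serializer="pickle"):
    if left.serializer != right.serializer:
        # Force both inputs to the default serializer so the unioned
        # RDD has a single, consistent format.
        left = left.reserialized(default_serializer)
        right = right.reserialized(default_serializer)
    return FakeRDD(left.data + right.data, left.serializer)

text_rdd = FakeRDD(["a", "b"], "mutf8")   # e.g. from sc.textFile()
number_rdd = FakeRDD([1, 2], "pickle")    # e.g. from sc.parallelize()
print(union(text_rdd, number_rdd).serializer)  # pickle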

In the long run, it would be nice to refactor the Java-side serialization logic so that it can apply different interpretations to the bytes that it receives from Python (e.g. unpack them into UTF-8 strings or MsgPack objects). We could also try to remove the assumption that Python sends framed input back to Java, but this might require a separate socket for sending control messages and exceptions. In the very long term, we might be able to generalize PythonRDD's protocol to the point where we could use the same code to support backends written in other languages (this would effectively be like pipe(), but with a more complicated binary protocol).

 

Wiki Markup
{display-footnotes}

 

Execution and pipelining

PySpark pipelines transformations by composing their functions. There is a one-to-one correspondence between PySpark stages and Spark scheduler stages, and each PySpark stage corresponds to a PipelinedRDD instance.
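To show the idea, here is an illustrative sketch (not PySpark's actual code) that fuses two of the (split, iterator) -> iterator functions seen earlier into a single function, so a map and a filter run in one pass over each partition:

Code Block
python

# Illustrative only: compose two per-partition functions so that a
# map and a filter run in a single scheduler stage.
def compose(prev_func, next_func):
    def pipelined_func(split, iterator):
        # Feed the parent's output directly into the next
        # transformation, with no stage boundary in between.
        return next_func(split, prev_func(split, iterator))
    return pipelined_func

fused = compose(
    lambda split, it: (x * 2 for x in it),       # .map(lambda x: x * 2)
    lambda split, it: (x for x in it if x > 2),  # .filter(lambda x: x > 2)
)
print(list(fused(0, iter([1, 2, 3]))))  # [4, 6]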

...

This approach required some complicated tricks in order to convert the results of Java operations back into pickled data. For example, a leftOuterJoin might produce a JavaRDD[(String, (String, Option[String]))]:

Code Block
python

>>> x = sc.parallelizePairs([("a", 1), ("b", 4)])
>>> y = sc.parallelizePairs([("a", 2)])
>>> print x.leftOuterJoin(y)._jrdd.collect().toString()
[(UydiJwou,(STQKLg==,None)), (UydhJwou,(STEKLg==,Some(STIKLg==)))]
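The opaque tokens in this output are base64-encoded pickles: UydiJwou, for example, decodes to the pickle bytes S'b'\n. (the string 'b'), and STQKLg== decodes to I4\n. (the integer 4).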

...

This approach's correctness relied on serialized data equality being equivalent to Python object equality. However, logically identical objects can sometimes produce different pickles. This is rare in practice, but it can easily occur with dictionaries:

Code Block
python

>>> import pickle
>>> a = {1: 0, 9: 0}
>>> b = {9: 0, 1: 0}
>>> a == b
True
>>> pickle.dumps(a) == pickle.dumps(b)
False

...