join {SparkR}    R Documentation
Join two DataFrames based on the given join expression.
## S4 method for signature 'DataFrame,DataFrame'
join(x, y, joinExpr = NULL, joinType = NULL)
x
    A Spark DataFrame
y
    A Spark DataFrame
joinExpr
    (Optional) The expression used to perform the join. joinExpr must be a Column expression. If joinExpr is omitted, join() performs a Cartesian join.
joinType
    The type of join to perform. The following join types are available: 'inner', 'outer', 'full', 'fullouter', 'leftouter', 'left_outer', 'left', 'right_outer', 'rightouter', 'right', and 'leftsemi'. The default joinType is "inner". A short comparison sketch follows the value description below.
A DataFrame containing the result of the join operation.
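As a hedged illustration (not part of the original help page), the sketch below contrasts an 'inner' join with a 'fullouter' join on two small DataFrames built with createDataFrame(). The column names id, name, and salary are made up for this example, and the setup assumes the same SparkR 1.x initialization used in the Examples further down.

sc <- sparkR.init()
sqlContext <- sparkRSQL.init(sc)

# Two small local data.frames promoted to Spark DataFrames (illustrative data)
employees <- createDataFrame(sqlContext,
                             data.frame(id = c(1, 2, 3),
                                        name = c("alice", "bob", "carol"),
                                        stringsAsFactors = FALSE))
salaries  <- createDataFrame(sqlContext,
                             data.frame(id = c(1, 2, 4),
                                        salary = c(100, 200, 400)))

inner <- join(employees, salaries, employees$id == salaries$id, "inner")
outer <- join(employees, salaries, employees$id == salaries$id, "fullouter")

count(inner)  # 2 rows: only ids 1 and 2 appear in both DataFrames
count(outer)  # 4 rows: unmatched ids 3 and 4 are padded with NULL on the missing side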
Other DataFrame functions: $, $<-, select, selectExpr; DataFrame-class, dataFrame, groupedData; [, [[, subset; agg, count, summarize; arrange, orderBy; as.data.frame; attach; cache; collect; colnames, colnames<-, columns, names, names<-; coltypes, coltypes<-; columns, dtypes, printSchema, schema; count, nrow; describe, summary; dim; distinct, unique; dropna, fillna, na.omit; dtypes; except; explain; filter, where; first; groupBy, group_by; head; insertInto; intersect; isLocal; limit; merge; mutate, transform; ncol; persist; printSchema; rbind, unionAll; registerTempTable; rename, withColumnRenamed; repartition; sample, sample_frac; saveAsParquetFile, write.parquet; saveAsTable; saveDF, write.df; selectExpr; showDF; show; str; take; unpersist; withColumn; write.json; write.text
## Not run:
##D sc <- sparkR.init()
##D sqlContext <- sparkRSQL.init(sc)
##D df1 <- read.json(sqlContext, path)
##D df2 <- read.json(sqlContext, path2)
##D join(df1, df2) # Performs a Cartesian join
##D join(df1, df2, df1$col1 == df2$col2) # Performs an inner join based on the join expression
##D join(df1, df2, df1$col1 == df2$col2, "right_outer")
## End(Not run)
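A further hedged sketch: after an equi-join, both input columns are kept in the result, so it is clearest to refer to columns through the parent DataFrames' Column objects (df1$col1, df2$col2). The column names are the placeholder names carried over from the example above.

## Not run:
##D joined <- join(df1, df2, df1$col1 == df2$col2, "left_outer")
##D # Refer to columns through the parent DataFrames to avoid duplicate-name ambiguity
##D result <- select(joined, df1$col1, df2$col2)
##D head(result)
## End(Not run)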