
A Whirlwind Tour of Scalameta

For any Scalameta related questions, don't hesitate to ask on our Gitter channel.

Note. This tutorial was originally created for a workshop at the conference. The workshop material turned out to be useful for many, so it has been moved here. You may still find occasional references to the original workshop.


There are several ways to install and run Scalameta.


You can use Scalameta as a library. The scalameta umbrella package includes modules for trees, tokens, parsing, pretty printing, the semantic API and more:
libraryDependencies += "org.scalameta" %% "scalameta" % "2.0.0-RC1"
Optionally, for extra utilities, you can use Contrib
libraryDependencies += "org.scalameta" %% "contrib" % "2.0.0-RC1"


The examples mentioned in this tutorial are available in a single repository that you can clone and run locally.
  1. Clone the workshop repo. Alternatively, for a minimal project template that uses the bleeding edge version of scalameta/paradise, clone this repo.
  2. Run sbt test to make sure everything works.
  3. Open the file Playground.scala
    package scalaworld
    import org.scalameta.logger // useful for debugging
    class Playground extends org.scalatest.FunSuite {
      import scala.meta._
      test("part 1: tokens") {
        val tokens = "val x = 2".tokenize.get
      }
      test("part 2: trees") {
        val tree = "val x = 2".parse[Stat].get
      }
    }
  4. To test playground on every edit, run sbt ~library/test.
  5. Set up the project in your favorite IDE, for example IntelliJ, ENSIME or Vim.

Ammonite REPL

To experiment with Scalameta in the REPL, you can run the following in the Ammonite REPL
import $ivy.`org.scalameta:scalameta_2.11:2.0.0-RC1`, scala.meta._
Note. The macro annotation examples will not run in the REPL, follow scalameta/paradise#10 for updates.


To accompany the workshop, here is the recording from the original conference talk.


Make sure you have Scalameta installed as a library from Setup. You can run these examples either from the console or from sbt, for example in the tutorial repo.

This whole workshop will assume you have this import in scope:

scala> import scala.meta._
import scala.meta._

Here's how to tokenize a small expression.

scala> "val x = 2".tokenize.get
res0: scala.meta.tokens.Tokens = Tokens(, val,  , x,  , =,  , 2, )
Let's discuss the most interesting methods on tokens.


The simplest method we can call is Tokens.syntax. It returns a string representation of the actual code behind the tokens, that is, how the code looks to a developer.

scala> "val x = 2".tokenize.get.syntax
res0: String = val x = 2

Tokens.toString() uses .syntax behind the scenes. However, you should never rely on toString() when manipulating Scalameta structures; prefer to call .syntax explicitly. The reason may not be obvious right now, but it will make more sense soon.
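To make the habit concrete, here is a minimal sketch (assuming the scalameta 2.x library is on the classpath) that recovers the source text with an explicit .syntax call instead of relying on toString():

```scala
import scala.meta._

object SyntaxExample extends App {
  // Tokenize a small expression and recover the source text explicitly
  val tokens = "val x = 2".tokenize.get
  val code: String = tokens.syntax
  assert(code == "val x = 2")
  println(code)
}
```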


Another useful method is Tokens.structure. The method shows details that may be relevant to us as metaprogrammers.

scala> "val x = 2".tokenize.get.structure
res0: String = Tokens(BOF [0..0), val [0..3),   [3..4), x [4..5),   [5..6), = [6..7),   [7..8), 2 [8..9), EOF [9..9))

.structure is often useful for debugging and testing.

Tokens vs. Token

The class Tokens is a wrapper around a sequence of Token objects. There are multiple subtypes of Token, while there is only one type Tokens.

scala> "val x = 2".tokenize.get.head
res0: scala.meta.tokens.Token =

BOF stands for "Beginning of file". Let's see what other kinds of token types are in the string:
scala> "val x = 2".tokenize.get.
  map(x => f"${x.structure}%10s -> ${x.productPrefix}").
  mkString("\n")
res0: String =
BOF [0..0) -> BOF
val [0..3) -> KwVal
    [3..4) -> Space
  x [4..5) -> Ident
    [5..6) -> Space
  = [6..7) -> Equals
    [7..8) -> Space
  2 [8..9) -> Int
EOF [9..9) -> EOF

Even spaces get their own tokens. The [0..3) part indicates that the val token starts at offset 0 and ends at offset 3.


What does token equality look like?

scala> "foobar".tokenize.get(1) == "foobar kas".tokenize.get(1)
res0: Boolean = false
Huh, why are they not the same?

Token equality is implemented with reference equality. You need to be explicit if you actually mean syntactic (.syntax), or structural (.structure) equality.

The tokens are syntactically equal.
scala> "foobar".tokenize.get(1).syntax == "foobar kas".tokenize.get(1).syntax
res0: Boolean = true
Even if we move the tokens around
scala> "kas foobar".tokenize.get(3).syntax == "foobar kas".tokenize.get(1).syntax
res0: Boolean = true
The tokens are also structurally equal.
scala> "foobar".tokenize.get(1).structure == "foobar kas".tokenize.get(1).structure
res0: Boolean = true
However, they are not structurally equal if we move them around.
scala> "kas foobar".tokenize.get(3).structure == "foobar kas".tokenize.get(1).structure
res0: Boolean = false


Tokenization can sometimes fail, for example in this case:

scala> """ val str = "unclosed literal """.tokenize
res0: scala.meta.tokenizers.Tokenized =
<input>:1: error: unclosed string literal
val str = "unclosed literal

If you prefer, you can safely pattern match on the tokenize result

scala> """ val str = "closed literal" """.tokenize match {
  case tokenizers.Tokenized.Success(tokenized) => tokenized
  case tokenizers.Tokenized.Error(e, _, _) => ???
}
res0: scala.meta.tokens.Tokens = Tokens(,  , val,  , str,  , =,  , "closed literal",  , )


Scalameta tokens are the foundation of Scalameta. Sometimes you don't have access to a parsed AST, and then your best bet is to work with tokens.

In the following chapter we will discuss another exciting data structure: the incredible scala.meta.Tree.


Reminder. We assume you have this import in scope:

scala> import scala.meta._
import scala.meta._


The easiest way to get started with Scalameta trees is using quasiquotes.

scala> q"case class User(name: String, age: Int)"
res0: meta.Defn.Class = case class User(name: String, age: Int)

Quasiquotes can be composed

scala> val method = q"def `is a baby` = age < 1"
method: meta.Defn.Def = def `is a baby` = age < 1

scala> q"""
case class User(name: String, age: Int) {
  $method
}
"""
res0: meta.Defn.Class = case class User(name: String, age: Int) { def `is a baby` = age < 1 }

Quasiquotes can also be used to deconstruct trees with pattern matching

scala> q"def `is a baby` = age < 1" match {
  case q"def $name = $body" =>
    s"You ${name.syntax} if your ${body.syntax}"
}
res0: String = You `is a baby` if your age < 1

NOTE. Quasiquotes currently ignore comments:

scala> q"val x = 2 // assignment".syntax
res0: String = val x = 2
If you need comments, you can use .parse[T]
scala> "val x = 2 // assignment".parse[Stat].get.syntax
res0: String = val x = 2 // assignment


If the contents that you want to parse are only known at runtime, you can't use quasiquotes. For example, this happens when you need to parse file contents.

Here's how to parse a compilation unit.

scala> "object Main extends App { println(1) }".parse[Source].get
res0: scala.meta.Source = object Main extends App { println(1) }

Pro tip. You can also call .parse[T] on a File, just like this

scala> new"readme/ParseMe.scala").parse[Source]
res0: scala.meta.parsers.Parsed[scala.meta.Source] =
class ParseMe { println("I'm inside a file") }

If we try to parse a statement as a compilation unit we will fail.

scala> "val x = 2".parse[Source]
res0: scala.meta.parsers.Parsed[scala.meta.Source] =
<input>:1: error: expected class or object definition
val x = 2

We need to explicitly parse it as a statement (Stat).

scala> "val x = 2".parse[Stat].get
res0: scala.meta.Stat = val x = 2

We can also parse a case statement.

scala> "case Foo(bar) if bar > 2 => println(bar)".parse[Case].get
res0: scala.meta.Case = case Foo(bar) if bar > 2 => println(bar)

Scalameta has dozens of parsers.

However, .parse[Stat] and .parse[Source] are usually all you need.
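One other parser that occasionally comes in handy is .parse[Type], for parsing a standalone type. A minimal sketch, assuming the scalameta 2.x import from above:

```scala
import scala.meta._

object ParseTypeExample extends App {
  // Parse a standalone type rather than a statement or compilation unit
  val tpe = "Map[Int, String]".parse[Type].get
  // .structure reveals the applied type constructor and its arguments
  println(tpe.structure)
}
```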


I didn't tell the whole story when I said you need to pass in a type argument to parse statements. You also need to pass in a dialect! However, Scalameta will by default pick the Scala211 dialect if you don't provide one explicitly.

With the Sbt0137 dialect, we can parse vals as top-level statements.

scala> dialects.Sbt0137(
  "lazy val core = project.settings(commonSettings)"
).parse[Source].get
res0: scala.meta.Source = lazy val core = project.settings(commonSettings)

We can even parse multiple top level statements

scala> dialects.Sbt0137("""
  lazy val core = project.settings(commonSettings)

  lazy val extra = project.dependsOn(core)
""").parse[Source].get
res0: scala.meta.Source =

  lazy val core = project.settings(commonSettings)

  lazy val extra = project.dependsOn(core)

For the remainder of the workshop, we will only work with the Scala211 dialect.


Just like with tokens, we can also run .syntax on trees.

scala> "foo(bar)".parse[Stat].get.syntax
res0: String = foo(bar)
However, Scalameta can also do this even if you manually construct the tree:
scala> Term.Apply(
  Term.Name("foo"),
  Term.Name("bar") :: Nil
).syntax
res0: String = foo(bar)

We never gave Scalameta parentheses, but it still figured out we needed them. Pretty cool, huh?


Just like with tokens, we can also run .structure on trees.

scala> "foo(bar)".parse[Stat].get.structure
res0: String = Term.Apply(Term.Name("foo"), List(Term.Name("bar")))

.structure ignores any syntactic trivia like whitespace and comments

scala> "foo  ( /* this is a comment */ bar  ) // eol".parse[Stat].get.structure
res0: String = Term.Apply(Term.Name("foo"), List(Term.Name("bar")))

This can be useful for example in debugging, testing or equality checking.
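For instance, a test can assert that two differently formatted snippets produce structurally identical trees. A minimal sketch, assuming scalameta 2.x on the classpath:

```scala
import scala.meta._

object StructureEquality extends App {
  val a = "foo(bar)".parse[Stat].get
  val b = "foo ( bar ) // comment".parse[Stat].get
  // Whitespace and comments differ, but the trees are structurally identical
  assert(a.structure == b.structure)
  println(a.structure)
}
```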


You can collect on a scala.meta.Tree just like on regular collections.

scala> source"""sealed trait Op[A]
    object Op extends B {
      case class Foo(i: Int) extends Op[Int]
      case class Bar(s: String) extends Op[String]
    }""".collect { case cls: Defn.Class => }
res0: List[meta.Type.Name] = List(Foo, Bar)


You can transform a scala.meta.Tree with .transform.

scala> q"myList.filter(_ > 3 + a).headOption // comments are removed :(".transform {
  case q"$lst.filter($cond).headOption" => q"$lst.find($cond)"
}
res0: scala.meta.Tree = myList.find(_ > 3 + a)

.transform does not preserve syntactic details such as comments and formatting. Some work has been done on source-aware transformation (see #457), but it still requires a bit more work.


Just like with tokens, tree equality is reference equality by default:

scala> q"foo(bar)" == q"foo(bar)"
res0: Boolean = false
This means you need to be explicit if you mean syntactic equality
scala> q"foo(bar)".syntax == q"foo(bar)".syntax
res0: Boolean = true

or structural equality

scala> q"foo(bar)".structure == q"foo(bar)".structure
res0: Boolean = true

Comprehensive trees

A key feature of Scalameta trees is that they comprehensively cover all corners of the Scala syntax. A side effect of this is that the Scalameta tree hierarchy contains a lot of types. For example, there is a different tree node for an abstract def (Decl.Def)

scala> q"def add(a: Int, b: Int)" // Decl.Def
res0: meta.Decl.Def = def add(a: Int, b: Int): Unit
and a def with an implementation (Defn.Def)
scala> q"def add(a: Int, b: Int) = a + b" // Defn.Def
res0: meta.Defn.Def = def add(a: Int, b: Int) = a + b

Fortunately, most of the time you won't need to worry about this. Quasiquotes help you create/match/compose/deconstruct the correct instances. However, occasionally you may need to debug the types of the trees you have.
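When you do need to tell the node types apart, for example to find the abstract methods of a trait, ordinary pattern matching works. A small sketch, assuming scalameta 2.x:

```scala
import scala.meta._

object DeclVsDefn extends App {
  val tree = "trait T { def a: Int; def b: Int = 2 }".parse[Source].get
  // Decl.Def is an abstract def, Defn.Def is a def with an implementation
  val abstractDefs = tree.collect { case d: Decl.Def => }
  val concreteDefs = tree.collect { case d: Defn.Def => }
  assert(abstractDefs == List("a"))
  assert(concreteDefs == List("b"))
}
```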

For your convenience, I've compiled together the most common types in this handy diagram:

Macro annotations

This section has now moved to scalameta/paradise.

Semantic API

The semantic API offers operations to query information from the Scala compiler, such as name resolution (println => _root_.scala.Predef.println) and symbol signatures (main([Ljava/lang/String;)V. => def main(args: Array[String]): Unit). These operations can, for example, be used by tools like scalafix to rewrite Scala source code. The semantic API cannot be used from macro annotations.


A key property of the Scalameta semantic API is its portability. "SemanticDB" is the storage format for information that the semantic API consumes.

Currently, a typical approach to building semantic developer tools in Scala is to implement them as compiler plugins and then run them inside builds. Apart from being a hassle to configure, this approach is also quite slow, because the Scala typechecker has to run every time a tool is invoked.

Scalameta enables a much more convenient workflow. First, we use our semanticdb-scalac compiler plugin to generate a semantic database for the codebase during type checking. This is done only once per unique snapshot of the codebase. Afterwards, using the persisted semantic data, we can launch any number of developer tools any number of times without having to typecheck the codebase again.

The storage format used for SemanticDB is defined using Protocol Buffers, or "protobuf" for short. The SemanticDB protobuf schema is small; at the time of this writing it is ~50 lines of code. The full schema is available here. By convention, files containing SemanticDB binary data use the .semanticdb file extension.


To build a SemanticDB with the Scala 2.x compiler, you will need the semanticdb-scalac compiler plugin. This compiler plugin is developed in the main Scalameta repository. There are several ways to install semanticdb-scalac into your build.


The required setup for the Semantic API is more involved than for the Syntactic API (parsing/tokenizing). Most importantly, we first need to compile source files with the Scala compiler and semanticdb-scalac compiler plugin in order to collect information such as types, symbols and more. While it's possible to tokenize or parse code that doesn't compile, a source file must successfully typecheck in order for it to be analyzed with the Semantic API.


For a plug-and-play example repository, clone sbt-semantic-example and run sbt app/run.


Add the following settings to your project:
lazy val semanticdbSettings = Seq(
  addCompilerPlugin("org.scalameta" % "semanticdb-scalac" % "2.0.0-RC1" cross CrossVersion.full),
  scalacOptions += "-Yrangepos" // required by semanticdb-scalac
)
lazy val analyzeMe = project.settings(
  // semanticdb-scalac supports the Scala versions: 2.11.11, 2.12.3.
  // Note, the Scala version must match down to the latest (PATCH) number.
  scalaVersion := "2.11.11",
  semanticdbSettings
)

Other build tools

You can use semanticdb-scalac outside of sbt by passing in custom scalac flags.
// -Xplugin-require:semanticdb-scalac tells scalac to fail fast if semanticdb-scalac failed to load
// -Yrangepos is required by semanticdb-scalac
scalac -Xplugin:/path/to/semanticdb-scalac.jar -Yrangepos -Xplugin-require:semanticdb-scalac Code.scala

scalameta/tutorial semantic/run

If you have cloned the repo for this tutorial as explained in Tutorial, you can execute sbt semantic/run to run an example application using the semantic API.


If you have coursier installed and want to analyze a source file Foo.scala, you can run this one-liner:
// make sure the version of scalac is 2.11.11
scalac -Yrangepos -Xplugin-require:semanticdb-scalac -Xplugin:$(coursier fetch --intransitive org.scalameta:semanticdb-scalac_2.11.11:1.8.0) Foo.scala

Verify installation

To verify that the semanticdb-scalac compiler plugin is installed, compile your project and look for .semanticdb files under the target directory. If you cannot find any .semanticdb files, then something went wrong with the setup. Don't hesitate to ask questions on our Gitter channel.


It's possible to build a SemanticDB for .sbt and .scala sources of sbt 0.13 builds. For more details, see scalafix/semanticdb-sbt.


Let's take a look at the inside of a .semanticdb file. Given a source file Input.scala

package scalaworld.semantic

case class Input(x: Int) {
  def +(other: Input) = Input(x + other.x)
  def +(other: Int) = Input(x + other)
  1 + 2
  List(x).map(num => Input(num + 1) + Input(2) + 3)
}

the pretty printed representation of its corresponding Input.semanticdb is the following:


Names

[8..18): scalaworld => _root_.scalaworld.
[19..27): semantic => _root_.scalaworld.semantic.
[40..45): Input <= _root_.scalaworld.semantic.Input#
[45..45): ε <= _root_.scalaworld.semantic.Input#`<init>`(I)V.
[46..47): x <= _root_.scalaworld.semantic.Input#(x)
[49..52): Int => _root_.scala.Int#
[62..63): + <= _root_.scalaworld.semantic.Input#`+`(Lscalaworld/semantic/Input;)Lscalaworld/semantic/Input;.
[64..69): other <= _root_.scalaworld.semantic.Input#`+`(Lscalaworld/semantic/Input;)Lscalaworld/semantic/Input;.(other)
[71..76): Input => _root_.scalaworld.semantic.Input#
[80..85): Input => _root_.scalaworld.semantic.Input.
[86..87): x => _root_.scalaworld.semantic.Input#(x)
[88..89): + => _root_.scala.Int#`+`(I)I.
[90..95): other => _root_.scalaworld.semantic.Input#`+`(Lscalaworld/semantic/Input;)Lscalaworld/semantic/Input;.(other)
[96..97): x => _root_.scalaworld.semantic.Input#(x)
[105..106): + <= _root_.scalaworld.semantic.Input#`+`(I)Lscalaworld/semantic/Input;.
[107..112): other <= _root_.scalaworld.semantic.Input#`+`(I)Lscalaworld/semantic/Input;.(other)
[114..117): Int => _root_.scala.Int#
[121..126): Input => _root_.scalaworld.semantic.Input.
[127..128): x => _root_.scalaworld.semantic.Input#(x)
[129..130): + => _root_.scala.Int#`+`(I)I.
[131..136): other => _root_.scalaworld.semantic.Input#`+`(I)Lscalaworld/semantic/Input;.(other)
[142..143): + => _root_.scala.Int#`+`(I)I.
[148..152): List => _root_.scala.collection.immutable.List.
[153..154): x => _root_.scalaworld.semantic.Input#(x)
[156..159): map => _root_.scala.collection.immutable.List#map(Lscala/Function1;Lscala/collection/generic/CanBuildFrom;)Ljava/lang/Object;.
[160..163): num <= semantic/input/src/main/scala/scalaworld/semantic/Input.scala@160..163
[167..172): Input => _root_.scalaworld.semantic.Input.
[173..176): num => semantic/input/src/main/scala/scalaworld/semantic/Input.scala@160..163
[177..178): + => _root_.scala.Int#`+`(I)I.
[182..183): + => _root_.scalaworld.semantic.Input#`+`(Lscalaworld/semantic/Input;)Lscalaworld/semantic/Input;.
[184..189): Input => _root_.scalaworld.semantic.Input.
[193..194): + => _root_.scalaworld.semantic.Input#`+`(I)Lscalaworld/semantic/Input;.

Messages

[140..145): [warning] a pure expression does nothing in statement position; you may be omitting necessary parentheses

Denotations

_root_.scala.Int# => abstract final class Int
_root_.scala.Int#`+`(I)I. => abstract def +: (x: Int): Int
  [4..7): Int => _root_.scala.Int#
  [10..13): Int => _root_.scala.Int#
_root_.scala.Int#`<init>`()V. => primaryctor <init>: (): Int
  [4..7): Int => _root_.scala.Int#
_root_.scala.collection.immutable.List#map(Lscala/Function1;Lscala/collection/generic/CanBuildFrom;)Ljava/lang/Object;. => final def map: [B, That] => (f: Function1[A, B])(implicit bf: CanBuildFrom[List[A], B, That]): That
  [17..26): Function1 => _root_.scala.Function1#
  [27..28): A => _root_.scala.collection.immutable.List#[A]
  [30..31): B => _root_.scala.collection.immutable.List#map(Lscala/Function1;Lscala/collection/generic/CanBuildFrom;)Ljava/lang/Object;.[B]
  [47..59): CanBuildFrom => _root_.scala.collection.generic.CanBuildFrom#
  [60..64): List => _root_.scala.collection.immutable.List#
  [65..66): A => _root_.scala.collection.immutable.List#[A]
  [69..70): B => _root_.scala.collection.immutable.List#map(Lscala/Function1;Lscala/collection/generic/CanBuildFrom;)Ljava/lang/Object;.[B]
  [72..76): That => _root_.scala.collection.immutable.List#map(Lscala/Function1;Lscala/collection/generic/CanBuildFrom;)Ljava/lang/Object;.[That]
  [80..84): That => _root_.scala.collection.immutable.List#map(Lscala/Function1;Lscala/collection/generic/CanBuildFrom;)Ljava/lang/Object;.[That]
_root_.scala.collection.immutable.List. => final object List
_root_.scalaworld. => package scalaworld
_root_.scalaworld.semantic. => package semantic
_root_.scalaworld.semantic.Input# => case class Input
_root_.scalaworld.semantic.Input#(x) => val param x: Int
  [0..3): Int => _root_.scala.Int#
_root_.scalaworld.semantic.Input#`+`(I)Lscalaworld/semantic/Input;. => def +: (other: Int): Input
  [8..11): Int => _root_.scala.Int#
  [14..19): Input => _root_.scalaworld.semantic.Input#
_root_.scalaworld.semantic.Input#`+`(I)Lscalaworld/semantic/Input;.(other) => param other: Int
  [0..3): Int => _root_.scala.Int#
_root_.scalaworld.semantic.Input#`+`(Lscalaworld/semantic/Input;)Lscalaworld/semantic/Input;. => def +: (other: Input): Input
  [8..13): Input => _root_.scalaworld.semantic.Input#
  [16..21): Input => _root_.scalaworld.semantic.Input#
_root_.scalaworld.semantic.Input#`+`(Lscalaworld/semantic/Input;)Lscalaworld/semantic/Input;.(other) => param other: Input
  [0..5): Input => _root_.scalaworld.semantic.Input#
_root_.scalaworld.semantic.Input#`<init>`(I)V. => primaryctor <init>: (x: Int): Input
  [4..7): Int => _root_.scala.Int#
  [10..15): Input => _root_.scalaworld.semantic.Input#
_root_.scalaworld.semantic.Input. => final object Input
semantic/input/src/main/scala/scalaworld/semantic/Input.scala@160..163 => param num: Int
  [0..3): Int => _root_.scala.Int#

Synthetics

[85..85): *.apply
  [0..1): * => _star_.
  [2..7): apply => _root_.scalaworld.semantic.Input.apply(I)Lscalaworld/semantic/Input;.
[126..126): *.apply
  [0..1): * => _star_.
  [2..7): apply => _root_.scalaworld.semantic.Input.apply(I)Lscalaworld/semantic/Input;.
[152..152): *.apply[Int]
  [0..1): * => _star_.
  [2..7): apply => _root_.scala.collection.immutable.List.apply(Lscala/collection/Seq;)Lscala/collection/immutable/List;.
  [8..11): Int => _root_.scala.Int#
[159..159): *[Input, List[Input]]
  [0..1): * => _star_.
  [2..7): Input => _root_.scalaworld.semantic.Input#
  [9..13): List => _root_.scala.collection.immutable.List#
  [14..19): Input => _root_.scalaworld.semantic.Input#
[172..172): *.apply
  [0..1): * => _star_.
  [2..7): apply => _root_.scalaworld.semantic.Input.apply(I)Lscalaworld/semantic/Input;.
[189..189): *.apply
  [0..1): * => _star_.
  [2..7): apply => _root_.scalaworld.semantic.Input.apply(I)Lscalaworld/semantic/Input;.
[197..197): *(scala.collection.immutable.List.canBuildFrom[Input])
  [0..1): * => _star_.
  [47..52): Input => _root_.scalaworld.semantic.Input#
  [34..46): canBuildFrom => _root_.scala.collection.immutable.List.canBuildFrom()Lscala/collection/generic/CanBuildFrom;.

That is a lot of output. Let's take a closer look at each of the sections: Names, Messages, Denotations, and Synthetics.


Below are explanations of the terminology used in SemanticDB.


The Names section is a map from names/identifiers to the Symbol they reference. A name is an identifier such as List or +. A name can reference different symbols depending on where it appears. For example, + references three different symbols in the example above:

// Int + Int
[88..89): +   => _root_.scala.Int#`+`(I)I.
// Input + Input
[62..63): +   => _root_.scalaworld.semantic.Input#`+`(Lscalaworld/semantic/Input;)Lscalaworld/semantic/Input;.
// Input + Int
[105..106): + => _root_.scalaworld.semantic.Input#`+`(I)Lscalaworld/semantic/Input;.


The Denotations section is a map from Symbols that appear in the source file to the Denotation of the symbol.


The Messages section is a list of the compiler messages (info/warning/error) reported during compilation. Note, some reported messages like deprecation warnings are not yet collected, see #759.


The Synthetics section lists the code that is inserted or inferred by the compiler. Synthetics don't appear in the original source file. They include inferred type parameters, type annotations of definitions, implicit arguments and more. It is possible to resolve symbols of synthetics, just like with regular names that appear in the source file.


Every definition such as a class/trait/val/var/def/object has a unique identifier. This unique identifier is called a Symbol. By convention, semanticdb-scalac uses the erased JVM signature of the definition to represent a Scala symbol. This makes it possible to distinguish between overloaded methods. A symbol is an opaque identifier; to understand the signature of a symbol, inspect its Denotation.


A denotation represents a Symbol's "meaning", including modifiers, visibility restrictions and type signature. A denotation can for example tell you whether a definition is a trait or a class, whether it has the implicit modifier, or what its return type is.


"Scalahost" is the old name for semanticdb-scalac.


"Sugars" is the old name for Synthetics.


Scalameta contrib is a module that provides common utilities for handling Scalameta data structures.

To use contrib, import scala.meta.contrib._.

Contrib exposes some collection-like methods on Tree.

scala> source"""
class A
trait B
object C
object D
""".find(_.is[Defn.Object])
res0: Option[scala.meta.Tree] = Some(object C)

scala> source"""
class A
trait B
object C {
  val x = 2
  val y = 3
}
object D
""".collectFirst { case q"val y = $body" => body.structure }
res1: Option[String] = Some(Lit.Int(3))

scala> source"""
class A
trait B
object C {
  val x = 2
  val y = 3
}
object D
""".contains(q"val x = 4")
res2: Boolean = false

Contrib has an Equal typeclass for comparing trees by structural or syntactic equality.

scala> q"val x = 2".isEqual(q"val x = 1")
res0: Boolean = false

scala> (q"val x = 2": Stat).isEqual("val x = 2 // comment".parse[Stat].get)
res1: Boolean = true

scala> (q"val x = 2": Stat).isEqual[Syntactically]("val x = 2 // comment".parse[Stat].get)
res2: Boolean = false

scala> q"lazy val x = 2".mods.exists(_.isEqual(mod"lazy"))
res3: Boolean = true

scala> q"lazy val x = 2".contains(q"3")
res4: Boolean = false

scala> q"lazy val x = 2".contains(q"2")
res5: Boolean = true

Contrib has an AssociatedComments helper to extract leading and trailing comments of tree nodes.

scala> val code: Source = """
/** This is a docstring */
trait MyTrait // leading comment
code: scala.meta.Source =

/** This is a docstring */
trait MyTrait // leading comment

scala> val comments = AssociatedComments(code)
comments: scala.meta.contrib.AssociatedComments =
  Leading =
    trait [28..33) => List(/**∙This∙is∙a∙docstring∙*/)

  Trailing =


scala> val myTrait = code.find(_.is[Defn.Trait]).get
myTrait: scala.meta.Tree = trait MyTrait

scala> comments.leading(myTrait) -> comments.trailing(myTrait)
res0: (Set[meta.tokens.Token.Comment], Set[meta.tokens.Token.Comment]) = (Set(/** This is a docstring */),Set())



Originally, Scalameta was founded to become a better macro system for Scala, but over time we shifted focus to developer tools and spun off the new macro system into a separate project: scalamacros/scalamacros. See scalameta/paradise for more information about the deprecated scalameta macro annotations.

What is the quasiquote for X?

Here is an overview of quasiquote syntax:

Can I use Scalameta with Scala.js?

Yes, the main Scalameta modules support Scala.js.

I'd like to use the semantic API as a replacement for scalac's presentation compiler. Is this doable, or an intended usage?

In principle, this should be doable, but it requires custom development and is not supported at the moment.

How do I use the Semantic API?

See the Semantic API section above.


Does Scalameta integrate with Zinc in order to achieve the semantic api?

No, Zinc and Scalameta are unrelated. The Scalameta Semantic API is enabled with the scalac compiler plugin semanticdb-scalac, which is designed to accommodate incremental compilation in order to play nicely with Zinc.

Where can I ask more questions?

Don't hesitate to ask on our Gitter channel.