// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

/*
Package pointer implements Andersen's analysis, an inclusion-based
pointer analysis algorithm first described in (Andersen, 1994).

A pointer analysis relates every pointer expression in a whole program
to the set of memory locations to which it might point. This
information can be used to construct a call graph of the program that
precisely represents the destinations of dynamic function and method
calls. It can also be used to determine, for example, which pairs of
channel operations operate on the same channel.

The package allows the client to request a set of expressions of
interest for which the points-to information will be returned once the
analysis is complete. In addition, the client may request that a
callgraph be constructed. The example program in example_test.go
demonstrates both of these features. Clients should not request more
information than they need, since doing so may significantly increase
the cost of the analysis.
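
As a rough sketch of that client workflow (illustrative only: it assumes
an SSA program has already been built, e.g. with
golang.org/x/tools/go/ssa/ssautil, and that mainPkg and the queried
value v are provided by the caller):

    cfg := &pointer.Config{
        Mains:          []*ssa.Package{mainPkg}, // assumed: the program's main package
        BuildCallGraph: true,                    // also construct the callgraph
    }
    cfg.AddQuery(v) // v is some ssa.Value of pointer-like type chosen by the client
    result, err := pointer.Analyze(cfg)
    if err != nil {
        log.Fatal(err) // internal error in the pointer analysis
    }
    for _, label := range result.Queries[v].PointsTo().Labels() {
        fmt.Println(label) // one abstract memory location v may point to
    }
    _ = result.CallGraph // present because BuildCallGraph was set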
CLASSIFICATION

Our algorithm is INCLUSION-BASED: the points-to sets for x and y will
be related by pts(y) ⊇ pts(x) if the program contains the statement
y = x.

It is FLOW-INSENSITIVE: it ignores all control flow constructs and the
order of statements in a program. It is therefore a "MAY ALIAS"
analysis: its facts are of the form "P may/may not point to L",
not "P must point to L".

It is FIELD-SENSITIVE: it builds separate points-to sets for distinct
fields, such as x and y in struct { x, y *int }.

It is mostly CONTEXT-INSENSITIVE: most functions are analyzed once,
so values can flow in at one call to the function and return out at
another. Only some smaller functions are analyzed with consideration
of their calling context.

It has a CONTEXT-SENSITIVE HEAP: objects are named by both allocation
site and context, so the objects returned by two distinct calls to f:

    func f() *T { return new(T) }

are distinguished up to the limits of the calling context.
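
For instance (an illustrative sketch of the naming, not the analysis's
internal representation):

    p := f() // pts(p) contains the object allocated in f for this call site
    q := f() // pts(q) contains a distinct object for this second call site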

It is a WHOLE PROGRAM analysis: it requires SSA-form IR for the
complete Go program and summaries for native code.

See the (Hind, PASTE'01) survey paper for an explanation of these terms.

SOUNDNESS

The analysis is fully sound when invoked on pure Go programs that do not
use reflection or unsafe.Pointer conversions. In other words, if there
is any possible execution of the program in which pointer P may point to
object O, the analysis will report that fact.

REFLECTION

By default, the "reflect" library is ignored by the analysis, as if all
its functions were no-ops, but if the client enables the Reflection flag,
the analysis will make a reasonable attempt to model the effects of
calls into this library. However, this comes at a significant
performance cost, and not all features of that library are yet
implemented. In addition, some simplifying approximations must be made
to ensure that the analysis terminates; for example, reflection can be
used to construct an infinite set of types and values of those types,
but the analysis arbitrarily bounds the depth of such types.

Most but not all reflection operations are supported.
In particular, addressable reflect.Values are not yet implemented, so
operations such as (reflect.Value).Set have no analytic effect.

UNSAFE POINTER CONVERSIONS

The pointer analysis makes no attempt to understand aliasing between the
operand x and result y of an unsafe.Pointer conversion:

    y = (*T)(unsafe.Pointer(x))

It is as if the conversion allocated an entirely new object:

    y = new(T)

NATIVE CODE

The analysis cannot model the aliasing effects of functions written in
languages other than Go, such as runtime intrinsics in C or assembly, or
code accessed via cgo. The result is as if such functions are no-ops.
However, various important intrinsics are understood by the analysis,
along with built-ins such as append.

The analysis currently provides no way for users to specify the aliasing
effects of native code.

------------------------------------------------------------------------

IMPLEMENTATION

The remaining documentation is intended for package maintainers and
pointer analysis specialists. Maintainers should have a solid
understanding of the referenced papers (especially those by H&L and PKH)
before making significant changes.

The implementation is similar to that described in (Pearce et al,
PASTE'04). Unlike many algorithms which interleave constraint
generation and solving, constructing the callgraph as they go, this
implementation for the most part observes a phase ordering (generation
before solving), with only simple (copy) constraints being generated
during solving. (The exception is reflection, which creates various
constraints during solving as new types flow to reflect.Value
operations.) This improves the traction of presolver optimisations,
but imposes certain restrictions, e.g. potential context sensitivity
is limited since all variants must be created a priori.

TERMINOLOGY

A type is said to be "pointer-like" if it is a reference to an object.
Pointer-like types include pointers and also interfaces, maps, channels,
functions and slices.

We occasionally use C's x->f notation to distinguish the case where x
is a struct pointer from x.f where x is a struct value.

Pointer analysis literature (and our comments) often uses the notation
dst=*src+offset to mean something different than what it means in Go.
It means: for each node index p in pts(src), the node index p+offset is
in pts(dst). Similarly *dst+offset=src is used for store constraints
and dst=src+offset for offset-address constraints.
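
To make that notation concrete, the complex constraint forms correspond
roughly to these operations (using the x->f shorthand defined above; an
informal illustration, not Go syntax):

    y = x->f    load constraint             y = *x+offset(f)
    x->f = y    store constraint            *x+offset(f) = y
    y = &x.f    offset-address constraint   y = x+offset(f)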

NODES

Nodes are the key data structure of the analysis, and have a dual role:
they represent both constraint variables (equivalence classes of
pointers) and members of points-to sets (things that can be pointed
at, i.e. "labels").

Nodes are naturally numbered. The numbering enables compact
representations of sets of nodes such as bitvectors (or BDDs); and the
ordering enables a very cheap way to group related nodes together. For
example, passing n parameters consists of generating n parallel
constraints from caller+i to callee+i for 0<=i<n.

The zero nodeid means "not a pointer". For simplicity, we generate flow
constraints even for non-pointer types such as int. The pointer
equivalence (PE) presolver optimization detects which variables cannot
point to anything; this includes not only all variables of non-pointer
types (such as int) but also variables of pointer-like types if they are
always nil, or are parameters to a function that is never called.

Each node represents a scalar part of a value or object.
Aggregate types (structs, tuples, arrays) are recursively flattened
out into a sequential list of scalar component types, and all the
elements of an array are represented by a single node. (The
flattening of a basic type is a list containing a single node.)
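
For example (an assumed illustration; the identity nodes that structs
and arrays also receive are described under OBJECTS and the per-type
sections below), the flattening of

    struct{ x int; y *int; z [4]string }

contains one node per scalar component, with a single node standing for
every element of the z array.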

Nodes are connected into a graph with various kinds of labelled edges:
simple edges (or copy constraints) represent value flow. Complex
edges (load, store, etc) trigger the creation of new simple edges
during the solving phase.

OBJECTS

Conceptually, an "object" is a contiguous sequence of nodes denoting
an addressable location: something that a pointer can point to. The
first node of an object has a non-nil obj field containing information
about the allocation: its size, context, and ssa.Value.

Objects include:

- functions and globals;
- variable allocations in the stack frame or heap;
- maps, channels and slices created by calls to make();
- allocations to construct an interface;
- allocations caused by conversions, e.g. []byte(str);
- arrays allocated by calls to append().

Many objects have no Go types. For example, the func, map and chan type
kinds in Go are all varieties of pointers, but their respective objects
are actual functions (executable code), maps (hash tables), and channels
(synchronized queues). Given the way we model interfaces, they too are
pointers to "tagged" objects with no Go type. And an *ssa.Global denotes
the address of a global variable, but the object for a Global is the
actual data. So, the type of an ssa.Value that creates an object is
"off by one indirection": a pointer to the object.

The individual nodes of an object are sometimes referred to as "labels".

For uniformity, all objects have a non-zero number of fields, even those
of the empty type struct{}. (All arrays are treated as if of length 1,
so there are no empty arrays. The empty tuple is never address-taken,
so is never an object.)
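
To make the "off by one indirection" point concrete (a hedged
illustration; the names are invented for the example):

    var g int        // the *ssa.Global g has type *int; its object is the int datum itself
    p := new(T)      // the *ssa.Alloc has type *T; its object is the allocated T
    b := []byte(str) // the conversion's result points to a newly allocated backing object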

TAGGED OBJECTS

A tagged object has the following layout:

    T          -- obj.flags ⊇ {otTagged}
    v
    ...

The T node's typ field is the dynamic type of the "payload": the value
v which follows, flattened out. The T node's obj has the otTagged
flag.

Tagged objects are needed when generalizing across types: interfaces,
reflect.Values, reflect.Types. Each of these three types is modelled
as a pointer that exclusively points to tagged objects.

Tagged objects may be indirect (obj.flags ⊇ {otIndirect}), meaning that
the value v is not of type T but *T; this is used only for
reflect.Values that represent lvalues. (These are not implemented yet.)

ANALYSIS ABSTRACTION OF EACH TYPE

Variables of the following "scalar" types may be represented by a
single node: basic types, pointers, channels, maps, slices, 'func'
pointers, interfaces.

Pointers

Nothing to say here, oddly.

Basic types (bool, string, numbers, unsafe.Pointer)

Currently all fields in the flattening of a type, including
non-pointer basic types such as int, are represented in objects and
values. Though non-pointer nodes within values are uninteresting,
non-pointer nodes in objects may be useful (if address-taken)
because they permit the analysis to deduce, in this example,

    var s struct{ ...; x int; ... }
    p := &s.x

that p points to s.x. If we ignored such object fields, we could only
say that p points somewhere within s.

All other basic types are ignored. Expressions of these types have
zero nodeid, and fields of these types within aggregate types are
omitted.

unsafe.Pointers are not modelled as pointers, so a conversion of an
unsafe.Pointer to *T is (unsoundly) treated as equivalent to new(T).

Channels

An expression of type 'chan T' is a kind of pointer that points
exclusively to channel objects, i.e. objects created by MakeChan (or
reflection).

'chan T' is treated like *T.
*ssa.MakeChan is treated as equivalent to new(T).
*ssa.Send and receive (*ssa.UnOp(ARROW)) are equivalent to store
and load.
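
A sketch of this correspondence (the comments give the *T analogy, not
real Go semantics):

    ch := make(chan *T) // treated like ch = new(*T)
    ch <- p             // treated like *ch = p  (store)
    q := <-ch           // treated like q = *ch  (load)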

Maps

An expression of type 'map[K]V' is a kind of pointer that points
exclusively to map objects, i.e. objects created by MakeMap (or
reflection).

map[K]V is treated like *M where M = struct{k K; v V}.
*ssa.MakeMap is equivalent to new(M).
*ssa.MapUpdate is equivalent to *y=x where *y and x have type M.
*ssa.Lookup is equivalent to y=x.v where x has type *M.
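
Concretely (an illustrative pairing of Go statements with the analogy
above):

    m := make(map[K]V) // treated like m = new(M)
    m[k] = v           // MapUpdate: treated like *m = M{k, v}
    x := m[k]          // Lookup: treated like x = m->v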

Slices

A slice []T, which dynamically resembles a struct{array *T, len, cap int},
is treated as if it were just a *T pointer; the len and cap fields are
ignored.

*ssa.MakeSlice is treated like new([1]T): an allocation of a
singleton array.
*ssa.Index on a slice is equivalent to a load.
*ssa.IndexAddr on a slice returns the address of the sole element of the
slice, i.e. the same address.
*ssa.Slice is treated as a simple copy.
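
For example (illustrative only):

    s := make([]T, n) // treated like s = new([1]T): a singleton array object
    p := &s[i]        // IndexAddr: the address of that sole element
    x := s[i]         // Index: a load of the element
    t := s[lo:hi]     // Slice: a simple copy; t aliases s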

Functions

An expression of type 'func...' is a kind of pointer that points
exclusively to function objects.

A function object has the following layout:

    identity     -- typ:*types.Signature; obj.flags ⊇ {otFunction}
    params_0     -- (the receiver, if a method)
    ...
    params_n-1
    results_0
    ...
    results_m-1

There may be multiple function objects for the same *ssa.Function
due to context-sensitive treatment of some functions.

The first node is the function's identity node.
Associated with every callsite is a special "targets" variable,
whose pts() contains the identity node of each function to which
the call may dispatch. Identity nodes are not otherwise used during
the analysis, but we construct the call graph from the pts()
solution for such nodes.

The following block of contiguous nodes represents the flattened-out
types of the parameters ("P-block") and results ("R-block") of the
function object.

The treatment of free variables of closures (*ssa.FreeVar) is like
that of global variables; it is not context-sensitive.
*ssa.MakeClosure instructions create copy edges to the free
variables' nodes.

A Go value of type 'func' (i.e. a pointer to one or more functions)
is a pointer whose pts() contains function objects. The valueNode()
for an *ssa.Function returns a singleton for that function.

Interfaces

An expression of type 'interface{...}' is a kind of pointer that
points exclusively to tagged objects. All tagged objects pointed to
by an interface are direct (the otIndirect flag is clear) and
concrete (the tag type T is not itself an interface type). The
associated ssa.Value for an interface's tagged objects may be an
*ssa.MakeInterface instruction, or nil if the tagged object was
created by an intrinsic (e.g. reflection).

Constructing an interface value causes generation of constraints for
all of the concrete type's methods; we can't tell a priori which
ones may be called.

TypeAssert y = x.(T) is implemented by a dynamic constraint
triggered by each tagged object O added to pts(x): a typeFilter
constraint if T is an interface type, or an untag constraint if T is
a concrete type. A typeFilter tests whether O.typ implements T; if
so, O is added to pts(y). An untag constraint tests whether O.typ is
assignable to T, and if so, a copy edge O.v -> y is added.
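
For example (an illustrative assertion; bytes and io are
standard-library packages):

    var x interface{} = new(bytes.Buffer) // pts(x) gains a *bytes.Buffer-tagged object O
    r, ok1 := x.(io.Reader)               // T is an interface: typeFilter asks whether O.typ implements io.Reader
    b, ok2 := x.(*bytes.Buffer)           // T is concrete: untag adds a copy edge O.v -> b if O.typ is assignable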

ChangeInterface is a simple copy because the representation of
tagged objects is independent of the interface type (in contrast
to the "method tables" approach used by the gc runtime).

y := Invoke x.m(...) is implemented by allocating contiguous P/R
blocks for the callsite and adding a dynamic rule triggered by each
tagged object added to pts(x). The rule adds param/results copy
edges to/from each discovered concrete method.

(Q. Why do we model an interface as a pointer to a pair of type and
value, rather than as a pair of a pointer to type and a pointer to
value?

A. Control-flow joins would merge interfaces ({T1}, {V1}) and ({T2},
{V2}) to make ({T1,T2}, {V1,V2}), leading to the infeasible and
type-unsafe combination (T1,V2). Treating the value and its concrete
type as inseparable makes the analysis type-safe.)

reflect.Value

A reflect.Value is modelled very similarly to an interface{}, i.e. as
a pointer exclusively to tagged objects, but with two generalizations.

1) a reflect.Value that represents an lvalue points to an indirect
(obj.flags ⊇ {otIndirect}) tagged object, which has a similar
layout to a tagged object except that the value is a pointer to
the dynamic type. Indirect tagged objects preserve the correct
aliasing so that mutations made by (reflect.Value).Set can be
observed.

Indirect objects only arise when an lvalue is derived from an
rvalue by indirection, e.g. the following code:

    type S struct { X T }
    var s S
    var i interface{} = &s    // i points to a *S-tagged object (from MakeInterface)
    v1 := reflect.ValueOf(i)  // v1 points to the same *S-tagged object as i
    v2 := v1.Elem()           // v2 points to an indirect S-tagged object, pointing to s
    v3 := v2.FieldByName("X") // v3 points to an indirect T-tagged object, pointing to s.X
    v3.Set(y)                 // pts(s.X) ⊇ pts(y)

Whether indirect or not, the concrete type of the tagged object
corresponds to the user-visible dynamic type, and the existence
of a pointer is an implementation detail.

(NB: indirect tagged objects are not yet implemented)

2) The dynamic type tag of a tagged object pointed to by a
reflect.Value may be an interface type; it need not be concrete.
This arises in code such as this:

    tEface := reflect.TypeOf(new(interface{})).Elem() // interface{}
    eface := reflect.Zero(tEface)

pts(eface) is a singleton containing an interface{}-tagged
object. That tagged object's payload is an interface{} value,
i.e. the pts of the payload contains only concrete-tagged
objects, although in this example it's the zero interface{} value,
so its pts is empty.

reflect.Type

Just as in the real "reflect" library, we represent a reflect.Type
as an interface whose sole implementation is the concrete type,
*reflect.rtype. (This choice is forced on us by go/types: clients
cannot fabricate types with arbitrary method sets.)

rtype instances are canonical: there is at most one per dynamic
type. (rtypes are in fact large structs but since identity is all
that matters, we represent them by a single node.)

The payload of each *rtype-tagged object is an *rtype pointer that
points to exactly one such canonical rtype object. We exploit this
by setting the node.typ of the payload to the dynamic type, not
'*rtype'. This saves us an indirection in each resolution rule. As
an optimisation, *rtype-tagged objects are canonicalized too.

Aggregate types

Aggregate types are treated as if all directly contained
aggregates are recursively flattened out.

Structs

*ssa.Field y = x.f creates a simple edge to y from x's node at f's offset.

*ssa.FieldAddr y = &x->f requires a dynamic closure rule to create
simple edges for each struct discovered in pts(x).

The nodes of a struct consist of a special 'identity' node (whose
type is that of the struct itself), followed by the nodes for all
the struct's fields, recursively flattened out. A pointer to the
struct is a pointer to its identity node. That node allows us to
distinguish a pointer to a struct from a pointer to its first field.

Field offsets are logical field offsets (plus one for the identity
node), so the sizes of the fields can be ignored by the analysis.

(The identity node is non-traditional but enables the distinction
described above, which is valuable for code comprehension tools.
Typical pointer analyses for C, whose purpose is compiler
optimization, must soundly model unsafe.Pointer (void*) conversions,
and this requires fidelity to the actual memory layout using physical
field offsets.)
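
A hedged illustration of the two instruction forms (real SSA for
equivalent source may lower differently, e.g. a local variable is
typically accessed via FieldAddr plus a load):

    type S struct{ f, g *int }
    var s S
    var x = &s
    p := &x.f // *ssa.FieldAddr: a dynamic rule over pts(x) adds an edge for each struct object found
    y := s.f  // *ssa.Field on a struct value: a simple edge from s's node at f's offset (+1 for identity)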

Arrays

We model an array by an identity node (whose type is that of the
array itself) followed by a node representing all the elements of
the array; the analysis does not distinguish elements with different
indices. Effectively, an array is treated like struct{elem T}, a
load y=x[i] like y=x.elem, and a store x[i]=y like x.elem=y; the
index i is ignored.

A pointer to an array is a pointer to its identity node. (A slice is
also a pointer to an array's identity node.) The identity node
allows us to distinguish a pointer to an array from a pointer to one
of its elements, but it is rather costly because it introduces more
offset constraints into the system. Furthermore, sound treatment of
unsafe.Pointer would require us to dispense with this node.

Arrays may be allocated by Alloc, by make([]T), by calls to append,
and via reflection.
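
For example (illustrative; element indices are ignored by the analysis):

    var a [8]*T
    p := &a[i] // treated like p = &a.elem
    x := a[j]  // treated like x = a.elem; i and j are not distinguished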

Tuples (T, ...)

Tuples are treated like structs with naturally numbered fields.
*ssa.Extract is analogous to *ssa.Field.

However, tuples have no identity field since by construction, they
cannot be address-taken.

FUNCTION CALLS

There are three kinds of function call:
(1) static "call"-mode calls of functions.
(2) dynamic "call"-mode calls of functions.
(3) dynamic "invoke"-mode calls of interface methods.
Cases 1 and 2 apply equally to methods and standalone functions.

Static calls

A static call consists of three steps:
- finding the function object of the callee;
- creating copy edges from the actual parameter value nodes to the
  P-block in the function object (this includes the receiver if
  the callee is a method);
- creating copy edges from the R-block in the function object to
  the value nodes for the result of the call.

A static function call is little more than two struct value copies
between the P/R blocks of caller and callee:

    callee.P = caller.P
    caller.R = callee.R

Context sensitivity

Static calls (alone) may be treated context sensitively,
i.e. each callsite may cause a distinct re-analysis of the
callee, improving precision. Our current context-sensitivity
policy treats all intrinsics and getter/setter methods in this
manner since such functions are small and seem like an obvious
source of spurious confluences, though this has not yet been
evaluated.

Dynamic function calls

Dynamic calls work in a similar manner except that the creation of
copy edges occurs dynamically, in a similar fashion to a pair of
struct copies in which the callee is indirect:

    callee->P = caller.P
    caller.R = callee->R

(Recall that the function object's P- and R-blocks are contiguous.)

Interface method invocation

For invoke-mode calls, we create a params/results block for the
callsite and attach a dynamic closure rule to the interface. For
each new tagged object that flows to the interface, we look up
the concrete method, find its function object, and connect its P/R
blocks to the callsite's P/R blocks, adding copy edges to the graph
during solving.

Recording call targets

The analysis notifies its clients of each callsite it encounters,
passing a CallSite interface. Among other things, the CallSite
contains a synthetic constraint variable ("targets") whose
points-to solution includes the set of all function objects to
which the call may dispatch.

It is via this mechanism that the callgraph is made available.
Clients may also elect to be notified of callgraph edges directly;
internally this just iterates all "targets" variables' pts(·)s.
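
For instance, a client that only wants the callgraph can walk the
result's edges (a sketch; it assumes BuildCallGraph was enabled and
uses golang.org/x/tools/go/callgraph):

    // Enumerate every caller -> callee edge discovered by the analysis.
    callgraph.GraphVisitEdges(result.CallGraph, func(e *callgraph.Edge) error {
        fmt.Printf("%s -> %s\n", e.Caller.Func, e.Callee.Func)
        return nil
    })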

PRESOLVER

We implement Hash-Value Numbering (HVN), a pre-solver constraint
optimization described in Hardekopf & Lin, SAS'07. This is documented
in more detail in hvn.go. We intend to add its cousins HR and HU in
future.

SOLVER

The solver is currently a naive Andersen-style implementation; it does
not perform online cycle detection, though we plan to add solver
optimisations such as Hybrid- and Lazy- Cycle Detection from (Hardekopf
& Lin, PLDI'07).

It uses difference propagation (Pearce et al, SQC'04) to avoid
redundant re-triggering of closure rules for values already seen.

Points-to sets are represented using sparse bit vectors (similar to
those used in LLVM and gcc), which are more space- and time-efficient
than sets based on Go's built-in map type or dense bit vectors.

Nodes are permuted prior to solving so that object nodes (which may
appear in points-to sets) are lower numbered than non-object (var)
nodes. This improves the density of the set over which the PTSs
range, and thus the efficiency of the representation.

Partly thanks to avoiding map iteration, the execution of the solver is
100% deterministic, a great help during debugging.

FURTHER READING

Andersen, L. O. 1994. Program analysis and specialization for the C
programming language. Ph.D. dissertation. DIKU, University of
Copenhagen.

David J. Pearce, Paul H. J. Kelly, and Chris Hankin. 2004. Efficient
field-sensitive pointer analysis for C. In Proceedings of the 5th ACM
SIGPLAN-SIGSOFT workshop on Program analysis for software tools and
engineering (PASTE '04). ACM, New York, NY, USA, 37-42.
http://doi.acm.org/10.1145/996821.996835

David J. Pearce, Paul H. J. Kelly, and Chris Hankin. 2004. Online
Cycle Detection and Difference Propagation: Applications to Pointer
Analysis. Software Quality Control 12, 4 (December 2004), 311-337.
http://dx.doi.org/10.1023/B:SQJO.0000039791.93071.a2

David Grove and Craig Chambers. 2001. A framework for call graph
construction algorithms. ACM Trans. Program. Lang. Syst. 23, 6
(November 2001), 685-746.
http://doi.acm.org/10.1145/506315.506316

Ben Hardekopf and Calvin Lin. 2007. The ant and the grasshopper: fast
and accurate pointer analysis for millions of lines of code. In
Proceedings of the 2007 ACM SIGPLAN conference on Programming language
design and implementation (PLDI '07). ACM, New York, NY, USA, 290-299.
http://doi.acm.org/10.1145/1250734.1250767

Ben Hardekopf and Calvin Lin. 2007. Exploiting pointer and location
equivalence to optimize pointer analysis. In Proceedings of the 14th
international conference on Static Analysis (SAS'07), Hanne Riis
Nielson and Gilberto Filé (Eds.). Springer-Verlag, Berlin, Heidelberg,
265-280.

Atanas Rountev and Satish Chandra. 2000. Off-line variable substitution
for scaling points-to analysis. In Proceedings of the ACM SIGPLAN 2000
conference on Programming language design and implementation (PLDI '00).
ACM, New York, NY, USA, 47-56.
http://doi.acm.org/10.1145/349299.349310
*/
package pointer // import "golang.org/x/tools/go/pointer"