Dataset class for Sequel::DataObjects::Database objects.
ACTION_METHODS = %w'<< [] []= all avg count columns columns! delete each empty? fetch_rows first get import insert insert_multiple interval last map max min multi_insert range select_hash select_map select_order_map set single_record single_value sum to_csv to_hash truncate update'.map{|x| x.to_sym}
    Action methods defined by Sequel that execute code on the database.
Returns the first record matching the conditions. Examples:
ds[:id=>1] # => {:id=>1}
# File lib/sequel/dataset/actions.rb, line 25
def [](*conditions)
  raise(Error, ARRAY_ACCESS_ERROR_MSG) if (conditions.length == 1 and conditions.first.is_a?(Integer)) or conditions.length == 0
  first(*conditions)
end
Returns the average value for the given column.
# File lib/sequel/dataset/actions.rb, line 49
def avg(column)
  aggregate_dataset.get{avg(column)}
end
Returns the columns in the result set in order. If the columns are currently cached, returns the cached value. Otherwise, a SELECT query is performed to get a single row. Adapters are expected to fill the columns cache with the column information when a query is performed. If the dataset does not have any rows, this may be an empty array depending on how the adapter is programmed.
If you are looking for all columns for a single table and maybe some information about each column (e.g. type), see Database#schema.
# File lib/sequel/dataset/actions.rb, line 62
def columns
  return @columns if @columns
  ds = unfiltered.unordered.clone(:distinct => nil, :limit => 1)
  ds.each{break}
  @columns = ds.instance_variable_get(:@columns)
  @columns || []
end
Returns the number of records in the dataset.
# File lib/sequel/dataset/actions.rb, line 78
def count
  aggregate_dataset.get{COUNT(:*){}.as(count)}.to_i
end
Deletes the records in the dataset. The returned value is generally the number of records deleted, but that is adapter dependent. See delete_sql.
# File lib/sequel/dataset/actions.rb, line 84
def delete
  execute_dui(delete_sql)
end
Iterates over the records in the dataset as they are yielded from the database adapter, and returns self.
Note that this method is not safe to use on many adapters if you are running additional queries inside the provided block. If you are running queries inside the block, you should use all instead of each.
# File lib/sequel/dataset/actions.rb, line 94
def each(&block)
  if @opts[:graph]
    graph_each(&block)
  elsif row_proc = @row_proc
    fetch_rows(select_sql){|r| yield row_proc.call(r)}
  else
    fetch_rows(select_sql, &block)
  end
  self
end
Returns true if no records exist in the dataset, false otherwise
# File lib/sequel/dataset/actions.rb, line 106
def empty?
  get(1).nil?
end
If an integer argument is given, it is interpreted as a limit, and all matching records up to that limit are returned. If no argument is passed, the first matching record is returned. If any other type of argument(s) is passed, it is given to filter and the first matching record is returned. If a block is given, it is used to filter the dataset before returning anything. Examples:
ds.first => {:id=>7}
ds.first(2) => [{:id=>6}, {:id=>4}]
ds.order(:id).first(2) => [{:id=>1}, {:id=>2}]
ds.first(:id=>2) => {:id=>2}
ds.first("id = 3") => {:id=>3}
ds.first("id = ?", 4) => {:id=>4}
ds.first{|o| o.id > 2} => {:id=>5}
ds.order(:id).first{|o| o.id > 2} => {:id=>3}
ds.first("id > ?", 4){|o| o.id < 6} => {:id=>5}
ds.order(:id).first(2){|o| o.id < 2} => [{:id=>1}]
# File lib/sequel/dataset/actions.rb, line 135
def first(*args, &block)
  ds = block ? filter(&block) : self

  if args.empty?
    ds.single_record
  else
    args = (args.size == 1) ? args.first : args
    if Integer === args
      ds.limit(args).all
    else
      ds.filter(args).single_record
    end
  end
end
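The argument dispatch above can be sketched in plain Ruby; first_dispatch is a hypothetical helper (not part of Sequel) that only shows which branch each call would take:

```ruby
# Hypothetical sketch of first's argument handling: no arguments means a
# single record, an Integer means a limited array, anything else is passed
# to filter before taking a single record.
def first_dispatch(*args)
  if args.empty?
    :single_record
  else
    args = args.first if args.size == 1
    Integer === args ? [:limit, args] : [:filter, args]
  end
end
```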
Return the column value for the first matching record in the dataset. Raises an error if both an argument and a block are given.
ds.get(:id)
ds.get{|o| o.sum(:id)}
# File lib/sequel/dataset/actions.rb, line 155
def get(column=nil, &block)
  if column
    raise(Error, ARG_BLOCK_ERROR_MSG) if block
    select(column).single_value
  else
    select(&block).single_value
  end
end
Inserts multiple records into the associated table. This method can be used to efficiently insert a large number of records into a table in a single query if the database supports it. Inserts are automatically wrapped in a transaction.
This method is called with a columns array and an array of value arrays:
dataset.import([:x, :y], [[1, 2], [3, 4]])
This method also accepts a dataset instead of an array of value arrays:
dataset.import([:x, :y], other_dataset.select(:a___x, :b___y))
The method also accepts a :slice or :commit_every option that specifies the number of records to insert per transaction. This is useful especially when inserting a large number of records, e.g.:
# this will commit every 50 records
dataset.import([:x, :y], [[1, 2], [3, 4], ...], :slice => 50)
# File lib/sequel/dataset/actions.rb, line 183
def import(columns, values, opts={})
  return @db.transaction{insert(columns, values)} if values.is_a?(Dataset)

  return if values.empty?
  raise(Error, IMPORT_ERROR_MSG) if columns.empty?

  if slice_size = opts[:commit_every] || opts[:slice]
    offset = 0
    loop do
      @db.transaction(opts){multi_insert_sql(columns, values[offset, slice_size]).each{|st| execute_dui(st)}}
      offset += slice_size
      break if offset >= values.length
    end
  else
    statements = multi_insert_sql(columns, values)
    @db.transaction{statements.each{|st| execute_dui(st)}}
  end
end
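The :slice/:commit_every batching can be seen in isolation; slice_batches is a hypothetical helper that mirrors import's offset loop without touching a database (each returned group would get its own transaction):

```ruby
# Hypothetical helper mirroring import's offset loop: cut the value arrays
# into groups of at most slice_size, one group per transaction.
def slice_batches(values, slice_size)
  batches = []
  offset = 0
  loop do
    batches << values[offset, slice_size]
    offset += slice_size
    break if offset >= values.length
  end
  batches
end
```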
Inserts values into the associated table. The returned value is generally the value of the primary key for the inserted row, but that is adapter dependent. See insert_sql.
# File lib/sequel/dataset/actions.rb, line 205
def insert(*values)
  execute_insert(insert_sql(*values))
end
Inserts multiple values. If a block is given it is invoked for each item in the given array before inserting it. See multi_insert as a possible faster version that inserts multiple records in one SQL statement.
# File lib/sequel/dataset/actions.rb, line 213
def insert_multiple(array, &block)
  if block
    array.each {|i| insert(block[i])}
  else
    array.each {|i| insert(i)}
  end
end
Reverses the order and then runs first. Note that this will not necessarily give you the last record in the dataset, unless you have an unambiguous order. If there is not currently an order for this dataset, raises an Error.
# File lib/sequel/dataset/actions.rb, line 231
def last(*args, &block)
  raise(Error, 'No order specified') unless @opts[:order]
  reverse.first(*args, &block)
end
Maps column values for each record in the dataset (if a column name is given), or performs the stock mapping functionality of Enumerable. Raises an error if both an argument and block are given. Examples:
ds.map(:id) => [1, 2, 3, ...]
ds.map{|r| r[:id] * 2} => [2, 4, 6, ...]
# File lib/sequel/dataset/actions.rb, line 242
def map(column=nil, &block)
  if column
    raise(Error, ARG_BLOCK_ERROR_MSG) if block
    super(){|r| r[column]}
  else
    super(&block)
  end
end
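On plain row hashes the two forms reduce to ordinary Enumerable#map, which is all the column form does per row:

```ruby
# Plain-Ruby equivalent of the two forms above: the column form reads one
# key from each row hash, the block form is stock Enumerable#map.
rows = [{:id=>1, :name=>'a'}, {:id=>2, :name=>'b'}]
ids = rows.map{|r| r[:id]}         # like ds.map(:id)
doubled = rows.map{|r| r[:id] * 2} # like ds.map{|r| r[:id] * 2}
```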
Returns the maximum value for the given column.
# File lib/sequel/dataset/actions.rb, line 252
def max(column)
  aggregate_dataset.get{max(column)}
end
Returns the minimum value for the given column.
# File lib/sequel/dataset/actions.rb, line 257
def min(column)
  aggregate_dataset.get{min(column)}
end
This is a front end for import that allows you to submit an array of hashes instead of arrays of columns and values:
dataset.multi_insert([{:x => 1}, {:x => 2}])
Be aware that all hashes should have the same keys if you use this calling method; otherwise some columns could be missed or set to NULL instead of their default values.
You can also use the :slice or :commit_every option that import accepts.
# File lib/sequel/dataset/actions.rb, line 271
def multi_insert(hashes, opts={})
  return if hashes.empty?
  columns = hashes.first.keys
  import(columns, hashes.map{|h| columns.map{|c| h[c]}}, opts)
end
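The hash-to-arrays conversion is easy to see in isolation. Note that the columns come from the first hash only, which is why all hashes should share the same keys:

```ruby
# The conversion multi_insert performs before delegating to import:
# take the column list from the first hash, then read those keys from
# every hash in order.
hashes = [{:x=>1, :y=>2}, {:x=>3, :y=>4}]
columns = hashes.first.keys                     # column list from the first hash
values  = hashes.map{|h| columns.map{|c| h[c]}} # one value array per hash
```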
Returns a hash with key_column values as keys and value_column values as values. Similar to to_hash, but only selects the two columns.
# File lib/sequel/dataset/actions.rb, line 287
def select_hash(key_column, value_column)
  select(key_column, value_column).to_hash(hash_key_symbol(key_column), hash_key_symbol(value_column))
end
Selects the column given (either as an argument or as a block), and returns an array of all values of that column in the dataset. If you give a block argument that returns an array with multiple entries, the contents of the resulting array are undefined.
# File lib/sequel/dataset/actions.rb, line 295
def select_map(column=nil, &block)
  ds = naked.ungraphed
  ds = if column
    raise(Error, ARG_BLOCK_ERROR_MSG) if block
    ds.select(column)
  else
    ds.select(&block)
  end
  ds.map{|r| r.values.first}
end
The same as select_map, but in addition orders the array by the column.
# File lib/sequel/dataset/actions.rb, line 307
def select_order_map(column=nil, &block)
  ds = naked.ungraphed
  ds = if column
    raise(Error, ARG_BLOCK_ERROR_MSG) if block
    ds.select(column).order(unaliased_identifier(column))
  else
    ds.select(&block).order(&block)
  end
  ds.map{|r| r.values.first}
end
Returns a string in CSV format containing the dataset records. By default the CSV representation includes the column titles in the first line. You can turn that off by passing false as the include_column_titles argument.
This does not use a CSV library or handle quoting of values in any way. If any values in any of the rows could include commas or line endings, you shouldn't use this.
# File lib/sequel/dataset/actions.rb, line 351
def to_csv(include_column_titles = true)
  n = naked
  cols = n.columns
  csv = ''
  csv << "#{cols.join(COMMA_SEPARATOR)}\r\n" if include_column_titles
  n.each{|r| csv << "#{cols.collect{|c| r[c]}.join(COMMA_SEPARATOR)}\r\n"}
  csv
end
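The naive format can be reproduced with plain string joins on row hashes (this sketch hardcodes the comma that COMMA_SEPARATOR supplies), which also makes clear why a value containing a comma would silently corrupt the output:

```ruby
# Plain-Ruby version of the CSV building loop: header line, then one
# comma-joined line per row, each terminated with CRLF. No quoting.
cols = [:id, :name]
rows = [{:id=>1, :name=>'abc'}, {:id=>2, :name=>'def'}]
csv = ''
csv << "#{cols.join(',')}\r\n"
rows.each{|r| csv << "#{cols.collect{|c| r[c]}.join(',')}\r\n"}
```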
Returns a hash with one column used as key and another used as value. If rows have duplicate values for the key column, the latter row(s) will overwrite the value of the previous row(s). If the value_column is not given or nil, uses the entire hash as the value.
# File lib/sequel/dataset/actions.rb, line 364
def to_hash(key_column, value_column = nil)
  inject({}) do |m, r|
    m[r[key_column]] = value_column ? r[value_column] : r
    m
  end
end
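On plain row hashes the inject loop behaves like this; the duplicate :id below shows the later row overwriting the earlier one:

```ruby
# Same inject pattern as to_hash, applied to an array of row hashes.
rows = [{:id=>1, :name=>'a'}, {:id=>2, :name=>'b'}, {:id=>2, :name=>'c'}]
# key and value columns given: later duplicates of the key win
by_id = rows.inject({}){|m, r| m[r[:id]] = r[:name]; m}
# no value column: the entire row hash is the value
whole = rows.inject({}){|m, r| m[r[:id]] = r; m}
```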
Truncates the dataset. Returns nil.
# File lib/sequel/dataset/actions.rb, line 372
def truncate
  execute_ddl(truncate_sql)
end
Updates values for the dataset. The returned value is generally the number of rows updated, but that is adapter dependent. See update_sql.
# File lib/sequel/dataset/actions.rb, line 378
def update(values={})
  execute_dui(update_sql(values))
end
These methods all return modified copies of the receiver.
COLUMN_CHANGE_OPTS = [:select, :sql, :from, :join].freeze
    The dataset options that require the removal of cached columns if changed.

NON_SQL_OPTIONS = [:server, :defaults, :overrides, :graph, :eager_graph, :graph_aliases]
    Which options don't affect the SQL generation. Used by simple_select_all? to determine if this is a simple SELECT * FROM table.

CONDITIONED_JOIN_TYPES = [:inner, :full_outer, :right_outer, :left_outer, :full, :right, :left]
    These symbols have _join methods created (e.g. inner_join) that call join_table with the symbol, passing along the arguments and block from the method call.

UNCONDITIONED_JOIN_TYPES = [:natural, :natural_left, :natural_right, :natural_full, :cross]
    These symbols have _join methods created (e.g. natural_join) that call join_table with the symbol. They only accept a single table argument which is passed to join_table, and they raise an error if called with a block.

JOIN_METHODS = (CONDITIONED_JOIN_TYPES + UNCONDITIONED_JOIN_TYPES).map{|x| "#{x}_join".to_sym} + [:join, :join_table]
    All methods that return modified datasets with a joined table added.

QUERY_METHODS = %w'add_graph_aliases and distinct except exclude filter for_update from from_self graph grep group group_and_count group_by having intersect invert limit lock_style naked or order order_append order_by order_more order_prepend paginate qualify query reverse reverse_order select select_all select_append select_more server set_defaults set_graph_aliases set_overrides unfiltered ungraphed ungrouped union unlimited unordered where with with_recursive with_sql'.collect{|x| x.to_sym} + JOIN_METHODS
    Methods that return modified datasets.
Adds a further filter to an existing filter using AND. If no filter exists, an error is raised. This method is identical to filter except that it expects an existing filter.
ds.filter(:a).and(:b) # SQL: WHERE a AND b
# File lib/sequel/dataset/query.rb, line 43
def and(*cond, &block)
  raise(InvalidOperation, "No existing filter found.") unless @opts[:having] || @opts[:where]
  filter(*cond, &block)
end
Returns a new clone of the dataset with the given options merged. If the changed options include options in COLUMN_CHANGE_OPTS, the cached columns are deleted.
# File lib/sequel/dataset/query.rb, line 51
def clone(opts = {})
  c = super()
  c.opts = @opts.merge(opts)
  c.instance_variable_set(:@columns, nil) if opts.keys.any?{|o| COLUMN_CHANGE_OPTS.include?(o)}
  c
end
Returns a copy of the dataset with the SQL DISTINCT clause. The DISTINCT clause is used to remove duplicate rows from the output. If arguments are provided, uses a DISTINCT ON clause, in which case it will only be distinct on those columns, instead of all returned columns. Raises an error if arguments are given and DISTINCT ON is not supported.
dataset.distinct # SQL: SELECT DISTINCT * FROM items
dataset.order(:id).distinct(:id) # SQL: SELECT DISTINCT ON (id) * FROM items ORDER BY id
# File lib/sequel/dataset/query.rb, line 67
def distinct(*args)
  raise(InvalidOperation, "DISTINCT ON not supported") if !args.empty? && !supports_distinct_on?
  clone(:distinct => args)
end
Adds an EXCEPT clause using a second dataset object. An EXCEPT compound dataset returns all rows in the current dataset that are not in the given dataset. Raises an InvalidOperation if the operation is not supported. Options: :all uses EXCEPT ALL instead, raising if EXCEPT ALL is not supported.
DB[:items].except(DB[:other_items]).sql #=> "SELECT * FROM items EXCEPT SELECT * FROM other_items"
# File lib/sequel/dataset/query.rb, line 82
def except(dataset, opts={})
  opts = {:all=>opts} unless opts.is_a?(Hash)
  raise(InvalidOperation, "EXCEPT not supported") unless supports_intersect_except?
  raise(InvalidOperation, "EXCEPT ALL not supported") if opts[:all] && !supports_intersect_except_all?
  compound_clone(:except, dataset, opts)
end
Performs the inverse of Dataset#filter.
dataset.exclude(:category => 'software').sql #=> "SELECT * FROM items WHERE (category != 'software')"
# File lib/sequel/dataset/query.rb, line 93
def exclude(*cond, &block)
  clause = (@opts[:having] ? :having : :where)
  cond = cond.first if cond.size == 1
  cond = filter_expr(cond, &block)
  cond = SQL::BooleanExpression.invert(cond)
  cond = SQL::BooleanExpression.new(:AND, @opts[clause], cond) if @opts[clause]
  clone(clause => cond)
end
Returns a copy of the dataset with the given conditions imposed upon it. If the query already has a HAVING clause, then the conditions are imposed in the HAVING clause. If not, then they are imposed in the WHERE clause.
filter accepts the following argument types:
filter also takes a block, which should return one of the above argument types, and is treated the same way. This block yields a virtual row object, which is easy to use to create identifiers and functions. For more details on the virtual row support, see the "Virtual Rows" guide
If both a block and regular argument are provided, they get ANDed together.
Examples:
dataset.filter(:id => 3).sql #=> "SELECT * FROM items WHERE (id = 3)"
dataset.filter('price < ?', 100).sql #=> "SELECT * FROM items WHERE price < 100"
dataset.filter([[:id, [1, 2, 3]], [:id, 0..10]]).sql #=> "SELECT * FROM items WHERE ((id IN (1, 2, 3)) AND ((id >= 0) AND (id <= 10)))"
dataset.filter('price < 100').sql #=> "SELECT * FROM items WHERE price < 100"
dataset.filter(:active).sql #=> "SELECT * FROM items WHERE active"
dataset.filter{|o| o.price < 100}.sql #=> "SELECT * FROM items WHERE (price < 100)"
Multiple filter calls can be chained for scoping:
software = dataset.filter(:category => 'software')
software.filter{|o| o.price < 100}.sql #=> "SELECT * FROM items WHERE ((category = 'software') AND (price < 100))"
See the "Dataset Filtering" guide for more examples and details.
# File lib/sequel/dataset/query.rb, line 150
def filter(*cond, &block)
  _filter(@opts[:having] ? :having : :where, *cond, &block)
end
Returns a copy of the dataset with the source changed.
dataset.from # SQL: SELECT *
dataset.from(:blah) # SQL: SELECT * FROM blah
dataset.from(:blah, :foo) # SQL: SELECT * FROM blah, foo
# File lib/sequel/dataset/query.rb, line 164
def from(*source)
  table_alias_num = 0
  sources = []
  source.each do |s|
    case s
    when Hash
      s.each{|k,v| sources << SQL::AliasedExpression.new(k,v)}
    when Dataset
      sources << SQL::AliasedExpression.new(s, dataset_alias(table_alias_num+=1))
    when Symbol
      sch, table, aliaz = split_symbol(s)
      if aliaz
        s = sch ? SQL::QualifiedIdentifier.new(sch.to_sym, table.to_sym) : SQL::Identifier.new(table.to_sym)
        sources << SQL::AliasedExpression.new(s, aliaz.to_sym)
      else
        sources << s
      end
    else
      sources << s
    end
  end
  o = {:from=>sources.empty? ? nil : sources}
  o[:num_dataset_sources] = table_alias_num if table_alias_num > 0
  clone(o)
end
Returns a dataset selecting from the current dataset. Supplying the :alias option controls the name of the result.
ds = DB[:items].order(:name).select(:id, :name)
ds.sql #=> "SELECT id, name FROM items ORDER BY name"
ds.from_self.sql #=> "SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS t1"
ds.from_self(:alias=>:foo).sql #=> "SELECT * FROM (SELECT id, name FROM items ORDER BY name) AS foo"
# File lib/sequel/dataset/query.rb, line 197
def from_self(opts={})
  fs = {}
  @opts.keys.each{|k| fs[k] = nil unless NON_SQL_OPTIONS.include?(k)}
  clone(fs).from(opts[:alias] ? as(opts[:alias]) : self)
end
Pattern match any of the columns to any of the terms. The terms can be strings (which use LIKE) or regular expressions (which are only supported in some databases). See Sequel::SQL::StringExpression.like. Note that the total number of pattern matches will be cols.length * terms.length, which could cause performance issues.
dataset.grep(:a, '%test%') # SQL: SELECT * FROM items WHERE a LIKE '%test%'
dataset.grep([:a, :b], %w'%test% foo') # SQL: SELECT * FROM items WHERE a LIKE '%test%' OR a LIKE 'foo' OR b LIKE '%test%' OR b LIKE 'foo'
# File lib/sequel/dataset/query.rb, line 211
def grep(cols, terms)
  filter(SQL::BooleanExpression.new(:OR, *Array(cols).collect{|c| SQL::StringExpression.like(c, *terms)}))
end
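The cols.length * terms.length blowup mentioned above is just the cross product of columns and terms, which Array#product makes explicit:

```ruby
# One LIKE condition is generated per (column, term) pair, so the total
# number of pattern matches is the size of this cross product.
cols  = [:a, :b]
terms = ['%test%', 'foo']
pairs = Array(cols).product(terms)
```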
Returns a copy of the dataset with the results grouped by the value of the given columns.
dataset.group(:id) # SELECT * FROM items GROUP BY id
dataset.group(:id, :name) # SELECT * FROM items GROUP BY id, name
# File lib/sequel/dataset/query.rb, line 220
def group(*columns)
  clone(:group => (columns.compact.empty? ? nil : columns))
end
Returns a dataset grouped by the given columns, with a count per group, ordered by the count of records. Column aliases may be supplied and will be included in the select clause.
Examples:
ds.group_and_count(:name).all => [{:name=>'a', :count=>1}, ...]
ds.group_and_count(:first_name, :last_name).all => [{:first_name=>'a', :last_name=>'b', :count=>1}, ...]
ds.group_and_count(:first_name___name).all => [{:name=>'a', :count=>1}, ...]
# File lib/sequel/dataset/query.rb, line 238
def group_and_count(*columns)
  group(*columns.map{|c| unaliased_identifier(c)}).select(*(columns + [COUNT_OF_ALL_AS_COUNT]))
end
Returns a copy of the dataset with the HAVING conditions changed. See filter for argument types.
dataset.group(:sum).having(:sum=>10) # SQL: SELECT * FROM items GROUP BY sum HAVING sum = 10
# File lib/sequel/dataset/query.rb, line 245
def having(*cond, &block)
  _filter(:having, *cond, &block)
end
Adds an INTERSECT clause using a second dataset object. An INTERSECT compound dataset returns all rows in both the current dataset and the given dataset. Raises an InvalidOperation if the operation is not supported. Options: :all uses INTERSECT ALL instead, raising if INTERSECT ALL is not supported.
DB[:items].intersect(DB[:other_items]).sql #=> "SELECT * FROM items INTERSECT SELECT * FROM other_items"
# File lib/sequel/dataset/query.rb, line 259
def intersect(dataset, opts={})
  opts = {:all=>opts} unless opts.is_a?(Hash)
  raise(InvalidOperation, "INTERSECT not supported") unless supports_intersect_except?
  raise(InvalidOperation, "INTERSECT ALL not supported") if opts[:all] && !supports_intersect_except_all?
  compound_clone(:intersect, dataset, opts)
end
Inverts the current filter
dataset.filter(:category => 'software').invert.sql #=> "SELECT * FROM items WHERE (category != 'software')"
# File lib/sequel/dataset/query.rb, line 270
def invert
  having, where = @opts[:having], @opts[:where]
  raise(Error, "No current filter") unless having || where
  o = {}
  o[:having] = SQL::BooleanExpression.invert(having) if having
  o[:where] = SQL::BooleanExpression.invert(where) if where
  clone(o)
end
Alias of inner_join
# File lib/sequel/dataset/query.rb, line 280
def join(*args, &block)
  inner_join(*args, &block)
end
Returns a joined dataset. Takes a join type, the table to join, an optional join expression (a conditions hash or array, an array of symbols for a USING join, or nil), an options hash (or a symbol/string/identifier used directly as the table alias), and an optional block. The block is yielded the joined table's alias, the previously joined alias, and the current joins, and should return an additional join condition.
# File lib/sequel/dataset/query.rb, line 314
def join_table(type, table, expr=nil, options={}, &block)
  using_join = expr.is_a?(Array) && !expr.empty? && expr.all?{|x| x.is_a?(Symbol)}
  if using_join && !supports_join_using?
    h = {}
    expr.each{|s| h[s] = s}
    return join_table(type, table, h, options)
  end

  case options
  when Hash
    table_alias = options[:table_alias]
    last_alias = options[:implicit_qualifier]
  when Symbol, String, SQL::Identifier
    table_alias = options
    last_alias = nil
  else
    raise Error, "invalid options format for join_table: #{options.inspect}"
  end

  if Dataset === table
    if table_alias.nil?
      table_alias_num = (@opts[:num_dataset_sources] || 0) + 1
      table_alias = dataset_alias(table_alias_num)
    end
    table_name = table_alias
  else
    table = table.table_name if table.respond_to?(:table_name)
    table_name = table_alias || table
  end

  join = if expr.nil? and !block_given?
    SQL::JoinClause.new(type, table, table_alias)
  elsif using_join
    raise(Sequel::Error, "can't use a block if providing an array of symbols as expr") if block_given?
    SQL::JoinUsingClause.new(expr, type, table, table_alias)
  else
    last_alias ||= @opts[:last_joined_table] || first_source_alias
    if Sequel.condition_specifier?(expr)
      expr = expr.collect do |k, v|
        k = qualified_column_name(k, table_name) if k.is_a?(Symbol)
        v = qualified_column_name(v, last_alias) if v.is_a?(Symbol)
        [k,v]
      end
    end
    if block_given?
      expr2 = yield(table_name, last_alias, @opts[:join] || [])
      expr = expr ? SQL::BooleanExpression.new(:AND, expr, expr2) : expr2
    end
    SQL::JoinOnClause.new(expr, type, table, table_alias)
  end

  opts = {:join => (@opts[:join] || []) + [join], :last_joined_table => table_name}
  opts[:num_dataset_sources] = table_alias_num if table_alias_num
  clone(opts)
end
If given an integer, the dataset will contain only the first l results. If given a range, it will contain only those at offsets within that range. If a second argument is given, it is used as an offset.
dataset.limit(10) # SQL: SELECT * FROM items LIMIT 10
dataset.limit(10, 20) # SQL: SELECT * FROM items LIMIT 10 OFFSET 20
# File lib/sequel/dataset/query.rb, line 383
def limit(l, o = nil)
  return from_self.limit(l, o) if @opts[:sql]

  if Range === l
    o = l.first
    l = l.last - l.first + (l.exclude_end? ? 0 : 1)
  end
  l = l.to_i if l.is_a?(String) && !l.is_a?(LiteralString)
  if l.is_a?(Integer)
    raise(Error, 'Limits must be greater than or equal to 1') unless l >= 1
  end
  opts = {:limit => l}
  if o
    o = o.to_i if o.is_a?(String) && !o.is_a?(LiteralString)
    if o.is_a?(Integer)
      raise(Error, 'Offsets must be greater than or equal to 0') unless o >= 0
    end
    opts[:offset] = o
  end
  clone(opts)
end
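The Range branch above converts a range into a limit/offset pair; range_to_limit_offset is a hypothetical standalone version of just that arithmetic:

```ruby
# Hypothetical helper mirroring limit's Range handling: the range start
# becomes the offset, and the span becomes the limit (inclusive ranges
# cover one extra row).
def range_to_limit_offset(r)
  limit = r.last - r.first + (r.exclude_end? ? 0 : 1)
  [limit, r.first]
end
```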
Returns a cloned dataset with the given lock style. If style is a string, it will be used directly. Otherwise, a symbol may be used for database independent locking. Currently :update is respected by most databases, and :share is supported by some.
# File lib/sequel/dataset/query.rb, line 409
def lock_style(style)
  clone(:lock => style)
end
Adds an alternate filter to an existing filter using OR. If no filter exists an error is raised.
dataset.filter(:a).or(:b) # SQL: SELECT * FROM items WHERE a OR b
# File lib/sequel/dataset/query.rb, line 425
def or(*cond, &block)
  clause = (@opts[:having] ? :having : :where)
  raise(InvalidOperation, "No existing filter found.") unless @opts[clause]
  cond = cond.first if cond.size == 1
  clone(clause => SQL::BooleanExpression.new(:OR, @opts[clause], filter_expr(cond, &block)))
end
Returns a copy of the dataset with the order changed. If a nil is given the returned dataset has no order. This can accept multiple arguments of varying kinds, and even SQL functions. If a block is given, it is treated as a virtual row block, similar to filter.
ds.order(:name).sql #=> 'SELECT * FROM items ORDER BY name'
ds.order(:a, :b).sql #=> 'SELECT * FROM items ORDER BY a, b'
ds.order('a + b'.lit).sql #=> 'SELECT * FROM items ORDER BY a + b'
ds.order(:a + :b).sql #=> 'SELECT * FROM items ORDER BY (a + b)'
ds.order(:name.desc).sql #=> 'SELECT * FROM items ORDER BY name DESC'
ds.order(:name.asc).sql #=> 'SELECT * FROM items ORDER BY name ASC'
ds.order{|o| o.sum(:name)}.sql #=> 'SELECT * FROM items ORDER BY sum(name)'
ds.order(nil).sql #=> 'SELECT * FROM items'
# File lib/sequel/dataset/query.rb, line 445
def order(*columns, &block)
  columns += Array(Sequel.virtual_row(&block)) if block
  clone(:order => (columns.compact.empty?) ? nil : columns)
end
Alias of order_more, for naming consistency with order_prepend.
# File lib/sequel/dataset/query.rb, line 451
def order_append(*columns, &block)
  order_more(*columns, &block)
end
Returns a copy of the dataset with the order columns added to the end of the existing order.
ds.order(:a).order(:b).sql #=> 'SELECT * FROM items ORDER BY b'
ds.order(:a).order_more(:b).sql #=> 'SELECT * FROM items ORDER BY a, b'
# File lib/sequel/dataset/query.rb, line 465
def order_more(*columns, &block)
  columns = @opts[:order] + columns if @opts[:order]
  order(*columns, &block)
end
Returns a copy of the dataset with the order columns added to the beginning of the existing order.
ds.order(:a).order(:b).sql #=> 'SELECT * FROM items ORDER BY b'
ds.order(:a).order_prepend(:b).sql #=> 'SELECT * FROM items ORDER BY b, a'
# File lib/sequel/dataset/query.rb, line 475
def order_prepend(*columns, &block)
  ds = order(*columns, &block)
  @opts[:order] ? ds.order_more(*@opts[:order]) : ds
end
Return a copy of the dataset with unqualified identifiers in the SELECT, WHERE, GROUP, HAVING, and ORDER clauses qualified by the given table. If no columns are currently selected, select all columns of the given table.
# File lib/sequel/dataset/query.rb, line 489
def qualify_to(table)
  o = @opts
  return clone if o[:sql]
  h = {}
  (o.keys & QUALIFY_KEYS).each do |k|
    h[k] = qualified_expression(o[k], table)
  end
  h[:select] = [SQL::ColumnAll.new(table)] if !o[:select] || o[:select].empty?
  clone(h)
end
Qualify the dataset to its current first source. This is useful if you have unqualified identifiers in the query that all refer to the first source, and you want to join to another table which has columns with the same name as columns in the current dataset. See qualify_to.
# File lib/sequel/dataset/query.rb, line 505
def qualify_to_first_source
  qualify_to(first_source)
end
Returns a copy of the dataset with the columns selected changed to the given columns. This also takes a virtual row block, similar to filter.
dataset.select(:a) # SELECT a FROM items
dataset.select(:a, :b) # SELECT a, b FROM items
dataset.select{|o| [o.a, o.sum(:b)]} # SELECT a, sum(b) FROM items
# File lib/sequel/dataset/query.rb, line 527
def select(*columns, &block)
  columns += Array(Sequel.virtual_row(&block)) if block
  m = []
  columns.map do |i|
    i.is_a?(Hash) ? m.concat(i.map{|k, v| SQL::AliasedExpression.new(k,v)}) : m << i
  end
  clone(:select => m)
end
Returns a copy of the dataset selecting the wildcard.
dataset.select(:a).select_all # SELECT * FROM items
# File lib/sequel/dataset/query.rb, line 539
def select_all
  clone(:select => nil)
end
Returns a copy of the dataset with the given columns added to the existing selected columns. If no columns are currently selected it will select the columns given in addition to *.
dataset.select(:a).select(:b) # SELECT b FROM items
dataset.select(:a).select_append(:b) # SELECT a, b FROM items
dataset.select_append(:b) # SELECT *, b FROM items
# File lib/sequel/dataset/query.rb, line 550
def select_append(*columns, &block)
  cur_sel = @opts[:select]
  cur_sel = [WILDCARD] if !cur_sel || cur_sel.empty?
  select(*(cur_sel + columns), &block)
end
Returns a copy of the dataset with the given columns added to the existing selected columns. If no columns are currently selected it will just select the columns given.
dataset.select(:a).select(:b) # SELECT b FROM items
dataset.select(:a).select_more(:b) # SELECT a, b FROM items
dataset.select_more(:b) # SELECT b FROM items
# File lib/sequel/dataset/query.rb, line 563
def select_more(*columns, &block)
  columns = @opts[:select] + columns if @opts[:select]
  select(*columns, &block)
end
Set the server for this dataset to use. Used to pick a specific database shard to run a query against, or to override the default (SELECT queries use the :read_only database and all other queries use the :default database).
# File lib/sequel/dataset/query.rb, line 571
def server(servr)
  clone(:server=>servr)
end
Adds a UNION clause using a second dataset object. A UNION compound dataset returns all rows in either the current dataset or the given dataset. Options: :all uses UNION ALL instead.
DB[:items].union(DB[:other_items]).sql #=> "SELECT * FROM items UNION SELECT * FROM other_items"
# File lib/sequel/dataset/query.rb, line 610
def union(dataset, opts={})
  opts = {:all=>opts} unless opts.is_a?(Hash)
  compound_clone(:union, dataset, opts)
end
Add a condition to the WHERE clause. See filter for argument types.
dataset.group(:a).having(:a).filter(:b) # SELECT * FROM items GROUP BY a HAVING a AND b
dataset.group(:a).having(:a).where(:b) # SELECT * FROM items WHERE b GROUP BY a HAVING a
# File lib/sequel/dataset/query.rb, line 633
def where(*cond, &block)
  _filter(:where, *cond, &block)
end
Add a simple common table expression (CTE) with the given name and a dataset that defines the CTE. A common table expression acts as an inline view for the query. Options:
# File lib/sequel/dataset/query.rb, line 642 642: def with(name, dataset, opts={}) 643: raise(Error, 'This dataset does not support common table expressions') unless supports_cte? 644: clone(:with=>(@opts[:with]||[]) + [opts.merge(:name=>name, :dataset=>dataset)]) 645: end
Add a recursive common table expression (CTE) with the given name, a dataset that defines the nonrecursive part of the CTE, and a dataset that defines the recursive part of the CTE. Options:
# File lib/sequel/dataset/query.rb, line 652 652: def with_recursive(name, nonrecursive, recursive, opts={}) 653: raise(Error, 'This dataset does not support common table expressions') unless supports_cte? 654: clone(:with=>(@opts[:with]||[]) + [opts.merge(:recursive=>true, :name=>name, :dataset=>nonrecursive.union(recursive, {:all=>opts[:union_all] != false, :from_self=>false}))]) 655: end
Returns a copy of the dataset with the static SQL used. This is useful if you want to keep the same row_proc/graph, but change the SQL used to custom SQL.
dataset.with_sql('SELECT * FROM foo') # SELECT * FROM foo
# File lib/sequel/dataset/query.rb, line 661 661: def with_sql(sql, *args) 662: sql = SQL::PlaceholderLiteralString.new(sql, args) unless args.empty? 663: clone(:sql=>sql) 664: end
Return true if the dataset has a non-nil value for any key in opts.
# File lib/sequel/dataset/query.rb, line 669 669: def options_overlap(opts) 670: !(@opts.collect{|k,v| k unless v.nil?}.compact & opts).empty? 671: end
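The overlap check above can be illustrated with a standalone sketch (a hypothetical helper taking the options hash explicitly; the real method reads the dataset's @opts):

```ruby
# Collect the keys of the current options that have non-nil values,
# then check whether any of them appear in the given key list.
def options_overlap(current_opts, keys)
  !(current_opts.collect { |k, v| k unless v.nil? }.compact & keys).empty?
end
```

For example, a dataset with a :limit option overlaps `[:limit]`, but a key whose value is nil does not count.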
These methods all return booleans, with most describing whether or not the dataset supports a feature.
WITH_SUPPORTED | = | :select_with_sql | Method used to check if WITH is supported |
Whether this dataset quotes identifiers.
# File lib/sequel/dataset/features.rb, line 13 13: def quote_identifiers? 14: @quote_identifiers 15: end
Whether the dataset supports common table expressions (the WITH clause).
# File lib/sequel/dataset/features.rb, line 31 31: def supports_cte? 32: select_clause_methods.include?(WITH_SUPPORTED) 33: end
Whether the dataset supports the DISTINCT ON clause, false by default.
# File lib/sequel/dataset/features.rb, line 36 36: def supports_distinct_on? 37: false 38: end
Whether the dataset supports the IS TRUE syntax.
# File lib/sequel/dataset/features.rb, line 51 51: def supports_is_true? 52: true 53: end
Whether the dataset supports the JOIN table USING (column1, …) syntax.
# File lib/sequel/dataset/features.rb, line 56 56: def supports_join_using? 57: true 58: end
Whether modifying joined datasets is supported.
# File lib/sequel/dataset/features.rb, line 61 61: def supports_modifying_joins? 62: false 63: end
These methods don't fit cleanly into another section.
NOTIMPL_MSG | = | "This method must be overridden in Sequel adapters".freeze |
ARRAY_ACCESS_ERROR_MSG | = | 'You cannot call Dataset#[] with an integer or with no arguments.'.freeze |
ARG_BLOCK_ERROR_MSG | = | 'Must use either an argument or a block, not both'.freeze |
IMPORT_ERROR_MSG | = | 'Using Sequel::Dataset#import an empty column array is not allowed'.freeze |
db | [RW] | The database that corresponds to this dataset |
opts | [RW] | The hash of options for this dataset, keys are symbols. |
Constructs a new Dataset instance with an associated database and options. Datasets are usually constructed by invoking the Database#[] method:
DB[:posts]
Sequel::Dataset is an abstract class that is not useful by itself. Each database adapter should provide a subclass of Sequel::Dataset, and have the Database#dataset method return an instance of that class.
# File lib/sequel/dataset/misc.rb, line 27 27: def initialize(db, opts = nil) 28: @db = db 29: @quote_identifiers = db.quote_identifiers? if db.respond_to?(:quote_identifiers?) 30: @identifier_input_method = db.identifier_input_method if db.respond_to?(:identifier_input_method) 31: @identifier_output_method = db.identifier_output_method if db.respond_to?(:identifier_output_method) 32: @opts = opts || {} 33: @row_proc = nil 34: end
Yield a dataset for each server in the connection pool that is tied to that server. Intended for use in sharded environments where all servers need to be modified with the same data:
DB[:configs].where(:key=>'setting').each_server{|ds| ds.update(:value=>'new_value')}
# File lib/sequel/dataset/misc.rb, line 48 48: def each_server 49: db.servers.each{|s| yield server(s)} 50: end
Alias of first_source_alias
# File lib/sequel/dataset/misc.rb, line 53 53: def first_source 54: first_source_alias 55: end
The first source (primary table) for this dataset. If the dataset doesn't have a table, raises an error. If the table is aliased, returns the aliased name.
# File lib/sequel/dataset/misc.rb, line 59 59: def first_source_alias 60: source = @opts[:from] 61: if source.nil? || source.empty? 62: raise Error, 'No source specified for query' 63: end 64: case s = source.first 65: when SQL::AliasedExpression 66: s.aliaz 67: when Symbol 68: sch, table, aliaz = split_symbol(s) 69: aliaz ? aliaz.to_sym : s 70: else 71: s 72: end 73: end
The first source (primary table) for this dataset. If the dataset doesn't have a table, raises an error. If the table is aliased, returns the original table, not the alias.
# File lib/sequel/dataset/misc.rb, line 78 78: def first_source_table 79: source = @opts[:from] 80: if source.nil? || source.empty? 81: raise Error, 'No source specified for query' 82: end 83: case s = source.first 84: when SQL::AliasedExpression 85: s.expression 86: when Symbol 87: sch, table, aliaz = split_symbol(s) 88: aliaz ? (sch ? SQL::QualifiedIdentifier.new(sch, table) : table.to_sym) : s 89: else 90: s 91: end 92: end
Creates a unique table alias that hasn't already been used in the dataset. table_alias can be any type of object accepted by alias_symbol. The symbol returned will be the implicit alias in the argument, possibly appended with "_N" if the implicit alias has already been used, where N is an integer starting at 0 and increasing until an unused one is found.
# File lib/sequel/dataset/misc.rb, line 106 106: def unused_table_alias(table_alias) 107: table_alias = alias_symbol(table_alias) 108: used_aliases = [] 109: used_aliases += opts[:from].map{|t| alias_symbol(t)} if opts[:from] 110: used_aliases += opts[:join].map{|j| j.table_alias ? alias_alias_symbol(j.table_alias) : alias_symbol(j.table)} if opts[:join] 111: if used_aliases.include?(table_alias) 112: i = 0 113: loop do 114: ta = :"#{table_alias}_#{i}" 115: return ta unless used_aliases.include?(ta) 116: i += 1 117: end 118: else 119: table_alias 120: end 121: end
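The "_N" suffix search can be seen in isolation with a simplified sketch (assumes the aliases are already plain symbols, skipping the alias_symbol conversion the real method performs):

```ruby
# Return table_alias if unused, otherwise append _0, _1, ... until a
# symbol not present in used_aliases is found.
def unused_table_alias(table_alias, used_aliases)
  return table_alias unless used_aliases.include?(table_alias)
  i = 0
  loop do
    ta = :"#{table_alias}_#{i}"
    return ta unless used_aliases.include?(ta)
    i += 1
  end
end
```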
Formats a DELETE statement using the given options and dataset options.
dataset.filter{|o| o.price >= 100}.delete_sql #=> "DELETE FROM items WHERE (price >= 100)"
# File lib/sequel/dataset/sql.rb, line 12 12: def delete_sql 13: return static_sql(opts[:sql]) if opts[:sql] 14: check_modification_allowed! 15: clause_sql(:delete) 16: end
Returns an EXISTS clause for the dataset as a LiteralString.
DB.select(1).where(DB[:items].exists).sql #=> "SELECT 1 WHERE (EXISTS (SELECT * FROM items))"
# File lib/sequel/dataset/sql.rb, line 22 22: def exists 23: LiteralString.new("EXISTS (#{select_sql})") 24: end
Formats an INSERT statement using the given values. The API is a little complex, and best explained by example:
# Default values
DB[:items].insert_sql #=> 'INSERT INTO items DEFAULT VALUES'
DB[:items].insert_sql({}) #=> 'INSERT INTO items DEFAULT VALUES'

# Values without columns
DB[:items].insert_sql(1,2,3) #=> 'INSERT INTO items VALUES (1, 2, 3)'
DB[:items].insert_sql([1,2,3]) #=> 'INSERT INTO items VALUES (1, 2, 3)'

# Values with columns
DB[:items].insert_sql([:a, :b], [1,2]) #=> 'INSERT INTO items (a, b) VALUES (1, 2)'
DB[:items].insert_sql(:a => 1, :b => 2) #=> 'INSERT INTO items (a, b) VALUES (1, 2)'

# Using a subselect
DB[:items].insert_sql(DB[:old_items]) #=> 'INSERT INTO items SELECT * FROM old_items'

# Using a subselect with columns
DB[:items].insert_sql([:a, :b], DB[:old_items]) #=> 'INSERT INTO items (a, b) SELECT * FROM old_items'
# File lib/sequel/dataset/sql.rb, line 42 42: def insert_sql(*values) 43: return static_sql(@opts[:sql]) if @opts[:sql] 44: 45: check_modification_allowed! 46: 47: columns = [] 48: 49: case values.size 50: when 0 51: return insert_sql({}) 52: when 1 53: case vals = values.at(0) 54: when Hash 55: vals = @opts[:defaults].merge(vals) if @opts[:defaults] 56: vals = vals.merge(@opts[:overrides]) if @opts[:overrides] 57: values = [] 58: vals.each do |k,v| 59: columns << k 60: values << v 61: end 62: when Dataset, Array, LiteralString 63: values = vals 64: else 65: if vals.respond_to?(:values) && (v = vals.values).is_a?(Hash) 66: return insert_sql(v) 67: end 68: end 69: when 2 70: if (v0 = values.at(0)).is_a?(Array) && ((v1 = values.at(1)).is_a?(Array) || v1.is_a?(Dataset) || v1.is_a?(LiteralString)) 71: columns, values = v0, v1 72: raise(Error, "Different number of values and columns given to insert_sql") if values.is_a?(Array) and columns.length != values.length 73: end 74: end 75: 76: columns = columns.map{|k| literal(String === k ? k.to_sym : k)} 77: clone(:columns=>columns, :values=>values)._insert_sql 78: end
Returns a literal representation of a value to be used as part of an SQL expression.
dataset.literal("abc'def\\") #=> "'abc''def\\\\'"
dataset.literal(:items__id) #=> "items.id"
dataset.literal([1, 2, 3]) #=> "(1, 2, 3)"
dataset.literal(DB[:items]) #=> "(SELECT * FROM items)"
dataset.literal(:x + 1 > :y) #=> "((x + 1) > y)"
If an unsupported object is given, an exception is raised.
# File lib/sequel/dataset/sql.rb, line 90 90: def literal(v) 91: case v 92: when String 93: return v if v.is_a?(LiteralString) 94: v.is_a?(SQL::Blob) ? literal_blob(v) : literal_string(v) 95: when Symbol 96: literal_symbol(v) 97: when Integer 98: literal_integer(v) 99: when Hash 100: literal_hash(v) 101: when SQL::Expression 102: literal_expression(v) 103: when Float 104: literal_float(v) 105: when BigDecimal 106: literal_big_decimal(v) 107: when NilClass 108: literal_nil 109: when TrueClass 110: literal_true 111: when FalseClass 112: literal_false 113: when Array 114: literal_array(v) 115: when Time 116: literal_time(v) 117: when DateTime 118: literal_datetime(v) 119: when Date 120: literal_date(v) 121: when Dataset 122: literal_dataset(v) 123: else 124: literal_other(v) 125: end 126: end
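The type-dispatch idea behind this method can be sketched with a toy literalizer covering a few of the branches above (a hypothetical helper, not the real implementation, which delegates to adapter-overridable literal_* methods):

```ruby
# Minimal SQL literalizer: dispatch on the Ruby type of the value.
def literal_sketch(v)
  case v
  when String  then "'#{v.gsub("'", "''")}'"  # escape embedded single quotes
  when Integer then v.to_s
  when nil     then "NULL"
  when true    then "TRUE"
  when false   then "FALSE"
  when Array   then "(#{v.map { |x| literal_sketch(x) }.join(', ')})"
  else raise ArgumentError, "can't literalize #{v.inspect}"
  end
end
```

As in the examples above, an unsupported object raises an exception.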
Returns an array of insert statements for inserting multiple records. This method is used by multi_insert to format insert statements and expects a keys array and an array of value arrays.
This method should be overridden by descendants if they support inserting multiple records in a single SQL statement.
# File lib/sequel/dataset/sql.rb, line 134 134: def multi_insert_sql(columns, values) 135: values.map{|r| insert_sql(columns, r)} 136: end
Formats a SELECT statement
dataset.select_sql # => "SELECT * FROM items"
# File lib/sequel/dataset/sql.rb, line 141 141: def select_sql 142: return static_sql(@opts[:sql]) if @opts[:sql] 143: clause_sql(:select) 144: end
Same as select_sql, not aliased directly to make subclassing simpler.
# File lib/sequel/dataset/sql.rb, line 147 147: def sql 148: select_sql 149: end
SQL query to truncate the table
# File lib/sequel/dataset/sql.rb, line 152 152: def truncate_sql 153: if opts[:sql] 154: static_sql(opts[:sql]) 155: else 156: check_modification_allowed! 157: raise(InvalidOperation, "Can't truncate filtered datasets") if opts[:where] 158: _truncate_sql(source_list(opts[:from])) 159: end 160: end
Formats an UPDATE statement using the given values.
dataset.update_sql(:price => 100, :category => 'software') #=> "UPDATE items SET price = 100, category = 'software'"
Raises an error if the dataset is grouped or includes more than one table.
# File lib/sequel/dataset/sql.rb, line 169 169: def update_sql(values = {}) 170: return static_sql(opts[:sql]) if opts[:sql] 171: check_modification_allowed! 172: clone(:values=>values)._update_sql 173: end
These methods, while public, are not designed to be used directly by the end user.
AND_SEPARATOR | = | " AND ".freeze |
BOOL_FALSE | = | "'f'".freeze |
BOOL_TRUE | = | "'t'".freeze |
COMMA_SEPARATOR | = | ', '.freeze |
COLUMN_REF_RE1 | = | /\A([\w ]+)__([\w ]+)___([\w ]+)\z/.freeze |
COLUMN_REF_RE2 | = | /\A([\w ]+)___([\w ]+)\z/.freeze |
COLUMN_REF_RE3 | = | /\A([\w ]+)__([\w ]+)\z/.freeze |
COUNT_FROM_SELF_OPTS | = | [:distinct, :group, :sql, :limit, :compounds] |
COUNT_OF_ALL_AS_COUNT | = | SQL::Function.new(:count, LiteralString.new('*'.freeze)).as(:count) |
DATASET_ALIAS_BASE_NAME | = | 't'.freeze |
FOR_UPDATE | = | ' FOR UPDATE'.freeze |
IS_LITERALS | = | {nil=>'NULL'.freeze, true=>'TRUE'.freeze, false=>'FALSE'.freeze}.freeze |
IS_OPERATORS | = | ::Sequel::SQL::ComplexExpression::IS_OPERATORS |
N_ARITY_OPERATORS | = | ::Sequel::SQL::ComplexExpression::N_ARITY_OPERATORS |
NULL | = | "NULL".freeze |
QUALIFY_KEYS | = | [:select, :where, :having, :order, :group] |
QUESTION_MARK | = | '?'.freeze |
DELETE_CLAUSE_METHODS | = | clause_methods(:delete, %w'from where') |
INSERT_CLAUSE_METHODS | = | clause_methods(:insert, %w'into columns values') |
SELECT_CLAUSE_METHODS | = | clause_methods(:select, %w'with distinct columns from join where group having compounds order limit lock') |
UPDATE_CLAUSE_METHODS | = | clause_methods(:update, %w'table set where') |
TIMESTAMP_FORMAT | = | "'%Y-%m-%d %H:%M:%S%N%z'".freeze |
STANDARD_TIMESTAMP_FORMAT | = | "TIMESTAMP #{TIMESTAMP_FORMAT}".freeze |
TWO_ARITY_OPERATORS | = | ::Sequel::SQL::ComplexExpression::TWO_ARITY_OPERATORS |
WILDCARD | = | LiteralString.new('*').freeze |
SQL_WITH | = | "WITH ".freeze |
SQL fragment for specifying given CaseExpression.
# File lib/sequel/dataset/sql.rb, line 229 229: def case_expression_sql(ce) 230: sql = '(CASE ' 231: sql << "#{literal(ce.expression)} " if ce.expression 232: ce.conditions.collect{ |c,r| 233: sql << "WHEN #{literal(c)} THEN #{literal(r)} " 234: } 235: sql << "ELSE #{literal(ce.default)} END)" 236: end
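The CASE fragment assembly above can be mirrored with a standalone sketch (uses a trivial stand-in for literal; the real code calls the dataset's literal method on each expression):

```ruby
# Build a parenthesized CASE fragment from [condition, result] pairs,
# a default, and an optional leading expression.
def case_expression_sql(conditions, default, expression = nil)
  literal = lambda { |v| v.is_a?(String) ? "'#{v}'" : v.to_s }
  sql = '(CASE '
  sql << "#{literal.call(expression)} " if expression
  conditions.each { |c, r| sql << "WHEN #{literal.call(c)} THEN #{literal.call(r)} " }
  sql << "ELSE #{literal.call(default)} END)"
end
```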
SQL fragment for complex expressions
# File lib/sequel/dataset/sql.rb, line 249 249: def complex_expression_sql(op, args) 250: case op 251: when *IS_OPERATORS 252: r = args.at(1) 253: if r.nil? || supports_is_true? 254: raise(InvalidOperation, 'Invalid argument used for IS operator') unless v = IS_LITERALS[r] 255: "(#{literal(args.at(0))} #{op} #{v})" 256: elsif op == :IS 257: complex_expression_sql(:"=", args) 258: else 259: complex_expression_sql(:OR, [SQL::BooleanExpression.new(:"!=", *args), SQL::BooleanExpression.new(:IS, args.at(0), nil)]) 260: end 261: when :IN, :"NOT IN" 262: cols = args.at(0) 263: vals = args.at(1) 264: col_array = true if cols.is_a?(Array) || cols.is_a?(SQL::SQLArray) 265: if vals.is_a?(Array) || vals.is_a?(SQL::SQLArray) 266: val_array = true 267: empty_val_array = vals.to_a == [] 268: end 269: if col_array 270: if empty_val_array 271: if op == :IN 272: literal(SQL::BooleanExpression.from_value_pairs(cols.to_a.map{|x| [x, x]}, :AND, true)) 273: else 274: literal(1=>1) 275: end 276: elsif !supports_multiple_column_in? 277: if val_array 278: expr = SQL::BooleanExpression.new(:OR, *vals.to_a.map{|vs| SQL::BooleanExpression.from_value_pairs(cols.to_a.zip(vs).map{|c, v| [c, v]})}) 279: literal(op == :IN ? expr : ~expr) 280: else 281: old_vals = vals 282: vals = vals.to_a 283: val_cols = old_vals.columns 284: complex_expression_sql(op, [cols, vals.map!{|x| x.values_at(*val_cols)}]) 285: end 286: else 287: "(#{literal(cols)} #{op} #{literal(vals)})" 288: end 289: else 290: if empty_val_array 291: if op == :IN 292: literal(SQL::BooleanExpression.from_value_pairs([[cols, cols]], :AND, true)) 293: else 294: literal(1=>1) 295: end 296: else 297: "(#{literal(cols)} #{op} #{literal(vals)})" 298: end 299: end 300: when *TWO_ARITY_OPERATORS 301: "(#{literal(args.at(0))} #{op} #{literal(args.at(1))})" 302: when *N_ARITY_OPERATORS 303: "(#{args.collect{|a| literal(a)}.join(" #{op} ")})" 304: when :NOT 305: "NOT #{literal(args.at(0))}" 306: when :NOOP 307: literal(args.at(0)) 308: when :'B~' 309: "~#{literal(args.at(0))}" 310: else 311: raise(InvalidOperation, "invalid operator #{op}") 312: end 313: end
SQL fragment specifying a JOIN clause without ON or USING.
# File lib/sequel/dataset/sql.rb, line 327 327: def join_clause_sql(jc) 328: table = jc.table 329: table_alias = jc.table_alias 330: table_alias = nil if table == table_alias 331: tref = table_ref(table) 332: " #{join_type_sql(jc.join_type)} #{table_alias ? as_sql(tref, table_alias) : tref}" 333: end
SQL fragment for a literal string with placeholders
# File lib/sequel/dataset/sql.rb, line 357 357: def placeholder_literal_string_sql(pls) 358: args = pls.args 359: s = if args.is_a?(Hash) 360: re = /:(#{args.keys.map{|k| Regexp.escape(k.to_s)}.join('|')})\b/ 361: pls.str.gsub(re){literal(args[$1.to_sym])} 362: else 363: i = -1 364: pls.str.gsub(QUESTION_MARK){literal(args.at(i+=1))} 365: end 366: s = "(#{s})" if pls.parens 367: s 368: end
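The two substitution modes above (named :placeholders from a hash, positional ? placeholders from an array) can be demonstrated standalone (with a toy literal lambda in place of Dataset#literal):

```ruby
# Replace :name placeholders using a regexp built from the hash keys,
# or ? placeholders consumed left to right from an array.
def sub_placeholders(str, args)
  literal = lambda { |v| v.is_a?(String) ? "'#{v}'" : v.to_s }
  if args.is_a?(Hash)
    re = /:(#{args.keys.map { |k| Regexp.escape(k.to_s) }.join('|')})\b/
    str.gsub(re) { literal.call(args[$1.to_sym]) }
  else
    i = -1
    str.gsub('?') { literal.call(args[i += 1]) }
  end
end
```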
SQL fragment for the qualified identifier, specifying a table and a column (or schema and table).
# File lib/sequel/dataset/sql.rb, line 372 372: def qualified_identifier_sql(qcr) 373: [qcr.table, qcr.column].map{|x| [SQL::QualifiedIdentifier, SQL::Identifier, Symbol].any?{|c| x.is_a?(c)} ? literal(x) : quote_identifier(x)}.join('.') 374: end
Adds quoting to identifiers (columns and tables). If identifiers are not being quoted, returns name as a string. If identifiers are being quoted quote the name with quoted_identifier.
# File lib/sequel/dataset/sql.rb, line 379 379: def quote_identifier(name) 380: return name if name.is_a?(LiteralString) 381: name = name.value if name.is_a?(SQL::Identifier) 382: name = input_identifier(name) 383: name = quoted_identifier(name) if quote_identifiers? 384: name 385: end
Separates the schema from the table and returns a string with them quoted (if quoting identifiers)
# File lib/sequel/dataset/sql.rb, line 389 389: def quote_schema_table(table) 390: schema, table = schema_and_table(table) 391: "#{"#{quote_identifier(schema)}." if schema}#{quote_identifier(table)}" 392: end
This method quotes the given name with the SQL standard double quote. It should be overridden by subclasses to provide quoting that does not match the SQL standard, such as backticks (used by MySQL and SQLite).
# File lib/sequel/dataset/sql.rb, line 397 397: def quoted_identifier(name) 398: "\"#{name.to_s.gsub('"', '""')}\"" 399: end
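The standard rule and the kind of override mentioned above can be contrasted in isolation (the backtick version is a sketch of what a MySQL-style subclass would do, not the actual adapter code):

```ruby
# SQL standard: wrap in double quotes, doubling any embedded double quotes.
def standard_quote(name)
  "\"#{name.to_s.gsub('"', '""')}\""
end

# MySQL/SQLite style: wrap in backticks, doubling embedded backticks.
def backtick_quote(name)
  "`#{name.to_s.gsub('`', '``')}`"
end
```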
Split the schema information from the table
# File lib/sequel/dataset/sql.rb, line 402 402: def schema_and_table(table_name) 403: sch = db.default_schema if db 404: case table_name 405: when Symbol 406: s, t, a = split_symbol(table_name) 407: [s||sch, t] 408: when SQL::QualifiedIdentifier 409: [table_name.table, table_name.column] 410: when SQL::Identifier 411: [sch, table_name.value] 412: when String 413: [sch, table_name] 414: else 415: raise Error, 'table_name should be a Symbol, SQL::QualifiedIdentifier, SQL::Identifier, or String' 416: end 417: end
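The symbol-splitting convention this relies on (schema__table___alias) can be sketched standalone using the same patterns as the COLUMN_REF_RE* constants listed in this document (a simplified split_symbol; the real private method also handles caching and validation details not shown):

```ruby
# Split :schema__table___alias style symbols into [schema, table, alias],
# with nil for any missing part.
def split_symbol(sym)
  s = sym.to_s
  case s
  when /\A([\w ]+)__([\w ]+)___([\w ]+)\z/ then [$1, $2, $3]
  when /\A([\w ]+)___([\w ]+)\z/           then [nil, $1, $2]
  when /\A([\w ]+)__([\w ]+)\z/            then [$1, $2, nil]
  else [nil, s, nil]
  end
end
```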
The SQL fragment for the given window's options.
# File lib/sequel/dataset/sql.rb, line 425 425: def window_sql(opts) 426: raise(Error, 'This dataset does not support window functions') unless supports_window_functions? 427: window = literal(opts[:window]) if opts[:window] 428: partition = "PARTITION BY #{expression_list(Array(opts[:partition]))}" if opts[:partition] 429: order = "ORDER BY #{expression_list(Array(opts[:order]))}" if opts[:order] 430: frame = case opts[:frame] 431: when nil 432: nil 433: when :all 434: "ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING" 435: when :rows 436: "ROWS UNBOUNDED PRECEDING" 437: else 438: raise Error, "invalid window frame clause, should be :all, :rows, or nil" 439: end 440: "(#{[window, partition, order, frame].compact.join(' ')})" 441: end
On some adapters, these use native prepared statements and bound variables, on others support is emulated. For details, see the "Prepared Statements/Bound Variables" guide.
PREPARED_ARG_PLACEHOLDER | = | LiteralString.new('?').freeze |
Set the bind variables to use for the call. If bind variables have already been set for this dataset, they are updated with the contents of bind_vars.
# File lib/sequel/dataset/prepared_statements.rb, line 179 179: def bind(bind_vars={}) 180: clone(:bind_vars=>@opts[:bind_vars] ? @opts[:bind_vars].merge(bind_vars) : bind_vars) 181: end
For the given type (:select, :insert, :update, or :delete), run the SQL with the bind variables specified in the hash. values are the values passed to insert or update (if one of those types is used), and may contain placeholders.
# File lib/sequel/dataset/prepared_statements.rb, line 188 188: def call(type, bind_variables={}, *values, &block) 189: prepare(type, nil, *values).call(bind_variables, &block) 190: end
Prepare an SQL statement for later execution. This returns a clone of the dataset extended with PreparedStatementMethods, on which you can call call with the hash of bind variables to do substitution. The prepared statement is also stored in the associated database. The following usage is identical:
ps = prepare(:select, :select_by_name)
ps.call(:name=>'Blah')
db.call(:select_by_name, :name=>'Blah')
# File lib/sequel/dataset/prepared_statements.rb, line 201 201: def prepare(type, name=nil, *values) 202: ps = to_prepared_statement(type, values) 203: db.prepared_statements[name] = ps if name 204: ps 205: end
Return a cloned copy of the current dataset extended with PreparedStatementMethods, setting the prepared statement type and the modify values.
# File lib/sequel/dataset/prepared_statements.rb, line 211 211: def to_prepared_statement(type, values=nil) 212: ps = bind 213: ps.extend(PreparedStatementMethods) 214: ps.prepared_type = type 215: ps.prepared_modify_values = values 216: ps 217: end
MUTATION_METHODS | = | QUERY_METHODS | All methods that should have a ! method added that modifies the receiver. |
identifier_input_method | [RW] | Set the method to call on identifiers going into the database for this dataset |
identifier_output_method | [RW] | Set the method to call on identifiers coming from the database for this dataset |
quote_identifiers | [W] | Whether to quote identifiers for this dataset |
row_proc | [RW] | The row_proc for this database, should be a Proc that takes a single hash argument and returns the object you want each to return. |
Setup mutation (e.g. filter!) methods. These operate the same as the non-! methods, but replace the options of the current dataset with the options of the resulting dataset.
# File lib/sequel/dataset/mutation.rb, line 14 14: def self.def_mutation_method(*meths) 15: meths.each do |meth| 16: class_eval("def #{meth}!(*args, &block); mutation_method(:#{meth}, *args, &block) end", __FILE__, __LINE__) 17: end 18: end
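The pattern above can be demonstrated with a toy class (hypothetical Query class, not Sequel itself): the generated bang method calls the non-bang version and replaces the receiver's options with those of the result.

```ruby
class Query
  attr_accessor :opts

  def initialize(opts = {})
    @opts = opts
  end

  # Non-! method: returns a modified copy, leaving the receiver untouched.
  def filter(cond)
    Query.new(@opts.merge(:where => cond))
  end

  # Generate a ! variant for each named method that mutates the receiver
  # by adopting the options of the non-! result and returning self.
  def self.def_mutation_method(*meths)
    meths.each do |meth|
      class_eval("def #{meth}!(*args, &block); @opts = send(:#{meth}, *args, &block).opts; self end")
    end
  end

  def_mutation_method :filter
end
```

After `q.filter!(:id => 1)`, q itself carries the :where option and the call returns q for chaining.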
Dataset graphing changes the dataset to yield hashes where keys are table name symbols and columns are hashes representing the values related to that table. All of these methods return modified copies of the receiver.
convert_types | [RW] | Whether to convert some Java types to ruby types when retrieving rows. Uses the database's setting by default, can be set to false to roughly double performance when fetching rows. |
Adds the given graph aliases to the list of graph aliases to use, unlike set_graph_aliases, which replaces the list. See set_graph_aliases.
# File lib/sequel/dataset/graph.rb, line 13 13: def add_graph_aliases(graph_aliases) 14: ds = select_more(*graph_alias_columns(graph_aliases)) 15: ds.opts[:graph_aliases] = (ds.opts[:graph_aliases] || (ds.opts[:graph][:column_aliases] rescue {}) || {}).merge(graph_aliases) 16: ds 17: end
Yields a paginated dataset for each page and returns the receiver. Does a count to find the total number of records for this dataset.
# File lib/sequel/extensions/pagination.rb, line 20 20: def each_page(page_size, &block) 21: raise(Error, "You cannot paginate a dataset that already has a limit") if @opts[:limit] 22: record_count = count 23: total_pages = (record_count / page_size.to_f).ceil 24: (1..total_pages).each{|page_no| yield paginate(page_no, page_size, record_count)} 25: self 26: end
Execute the SQL on the database and yield the rows as hashes with symbol keys.
# File lib/sequel/adapters/do.rb, line 175 175: def fetch_rows(sql) 176: execute(sql) do |reader| 177: cols = @columns = reader.fields.map{|f| output_identifier(f)} 178: while(reader.next!) do 179: h = {} 180: cols.zip(reader.values).each{|k, v| h[k] = v} 181: yield h 182: end 183: end 184: self 185: end
Allows you to join multiple datasets/tables and have the result set split into component tables.
This differs from the usual usage of join, which returns the result set as a single hash. For example:
# CREATE TABLE artists (id INTEGER, name TEXT);
# CREATE TABLE albums (id INTEGER, name TEXT, artist_id INTEGER);
DB[:artists].left_outer_join(:albums, :artist_id=>:id).first
#=> {:id=>albums.id, :name=>albums.name, :artist_id=>albums.artist_id}
DB[:artists].graph(:albums, :artist_id=>:id).first
#=> {:artists=>{:id=>artists.id, :name=>artists.name}, :albums=>{:id=>albums.id, :name=>albums.name, :artist_id=>albums.artist_id}}
Using a join such as left_outer_join, the attribute names that are shared between the tables are combined in the single return hash. You can get around that by using .select with correct aliases for all of the columns, but it is simpler to use graph and have the result set split for you. In addition, graph respects any row_proc of the current dataset and the datasets you use with graph.
If you are graphing a table and all columns for that table are nil, this indicates that no matching rows existed in the table, so graph will return nil instead of a hash with all nil values:
# If the artist doesn't have any albums
DB[:artists].graph(:albums, :artist_id=>:id).first
#=> {:artists=>{:id=>artists.id, :name=>artists.name}, :albums=>nil}
Arguments:
# File lib/sequel/dataset/graph.rb, line 67 67: def graph(dataset, join_conditions = nil, options = {}, &block) 68: # Allow the use of a model, dataset, or symbol as the first argument 69: # Find the table name/dataset based on the argument 70: dataset = dataset.dataset if dataset.respond_to?(:dataset) 71: table_alias = options[:table_alias] 72: case dataset 73: when Symbol 74: table = dataset 75: dataset = @db[dataset] 76: table_alias ||= table 77: when ::Sequel::Dataset 78: if dataset.simple_select_all? 79: table = dataset.opts[:from].first 80: table_alias ||= table 81: else 82: table = dataset 83: table_alias ||= dataset_alias((@opts[:num_dataset_sources] || 0)+1) 84: end 85: else 86: raise Error, "The dataset argument should be a symbol, dataset, or model" 87: end 88: 89: # Raise Sequel::Error with explanation that the table alias has been used 90: raise_alias_error = lambda do 91: raise(Error, "this #{options[:table_alias] ? 'alias' : 'table'} has already been used, please specify " \ 92: "#{options[:table_alias] ? 'a different alias' : 'an alias via the :table_alias option'}") 93: end 94: 95: # Only allow table aliases that haven't been used 96: raise_alias_error.call if @opts[:graph] && @opts[:graph][:table_aliases] && @opts[:graph][:table_aliases].include?(table_alias) 97: 98: # Use a from_self if this is already a joined table 99: ds = (!@opts[:graph] && (@opts[:from].length > 1 || @opts[:join])) ? from_self(:alias=>options[:from_self_alias] || first_source) : self 100: 101: # Join the table early in order to avoid cloning the dataset twice 102: ds = ds.join_table(options[:join_type] || :left_outer, table, join_conditions, :table_alias=>table_alias, :implicit_qualifier=>options[:implicit_qualifier], &block) 103: opts = ds.opts 104: 105: # Whether to include the table in the result set 106: add_table = options[:select] == false ? false : true
107: # Whether to add the columns to the list of column aliases 108: add_columns = !ds.opts.include?(:graph_aliases) 109: 110: # Setup the initial graph data structure if it doesn't exist 111: unless graph = opts[:graph] 112: master = alias_symbol(ds.first_source_alias) 113: raise_alias_error.call if master == table_alias 114: # Master hash storing all .graph related information 115: graph = opts[:graph] = {} 116: # Associates column aliases back to tables and columns 117: column_aliases = graph[:column_aliases] = {} 118: # Associates table alias (the master is never aliased) 119: table_aliases = graph[:table_aliases] = {master=>self} 120: # Keep track of the alias numbers used 121: ca_num = graph[:column_alias_num] = Hash.new(0) 122: # All columns in the master table are never 123: # aliased, but are not included if set_graph_aliases 124: # has been used. 125: if add_columns 126: select = opts[:select] = [] 127: columns.each do |column| 128: column_aliases[column] = [master, column] 129: select.push(SQL::QualifiedIdentifier.new(master, column)) 130: end 131: end 132: end 133: 134: # Add the table alias to the list of aliases 135: # Even if it isn't being used in the result set, 136: # we add a key for it with a nil value so we can check if it 137: # is used more than once 138: table_aliases = graph[:table_aliases] 139: table_aliases[table_alias] = add_table ? dataset : nil 140: 141: # Add the columns to the selection unless we are ignoring them 142: if add_table && add_columns 143: select = opts[:select] 144: column_aliases = graph[:column_aliases] 145: ca_num = graph[:column_alias_num] 146: # Which columns to add to the result set 147: cols = options[:select] || dataset.columns 148: # If the column hasn't been used yet, don't alias it. 149: # If it has been used, try table_column.
150: # If that has been used, try table_column_N 151: # using the next value of N that we know hasn't been 152: # used 153: cols.each do |column| 154: col_alias, identifier = if column_aliases[column] 155: column_alias = :"#{table_alias}_#{column}" 156: if column_aliases[column_alias] 157: column_alias_num = ca_num[column_alias] 158: column_alias = :"#{column_alias}_#{column_alias_num}" 159: ca_num[column_alias] += 1 160: end 161: [column_alias, SQL::QualifiedIdentifier.new(table_alias, column).as(column_alias)] 162: else 163: [column, SQL::QualifiedIdentifier.new(table_alias, column)] 164: end 165: column_aliases[col_alias] = [table_alias, column] 166: select.push(identifier) 167: end 168: end 169: ds 170: end
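The column-aliasing rule from that loop can be isolated in a small sketch (plain symbols and hashes; an unused column name passes through unchanged, a clash becomes table_column, and a further clash becomes table_column_N):

```ruby
# Pick a unique alias for column under table_alias. column_aliases maps
# aliases already taken; ca_num tracks the next numeric suffix to try.
def graph_column_alias(column, table_alias, column_aliases, ca_num)
  return column unless column_aliases[column]
  column_alias = :"#{table_alias}_#{column}"
  if column_aliases[column_alias]
    column_alias = :"#{column_alias}_#{ca_num[column_alias]}"
  end
  column_alias
end
```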
Returns a paginated dataset. The returned dataset is limited to the page size at the correct offset, and extended with the Pagination module. If a record count is not provided, does a count of total number of records for this dataset.
# File lib/sequel/extensions/pagination.rb, line 11 11: def paginate(page_no, page_size, record_count=nil) 12: raise(Error, "You cannot paginate a dataset that already has a limit") if @opts[:limit] 13: paginated = limit(page_size, (page_no - 1) * page_size) 14: paginated.extend(Pagination) 15: paginated.set_pagination_info(page_no, page_size, record_count || count) 16: end
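The arithmetic behind each_page and paginate above is a ceiling division for the page count plus a limit/offset pair per page:

```ruby
# Map (page_no, page_size, record_count) to the total page count and the
# limit/offset pair the paginated dataset would use.
def page_window(page_no, page_size, record_count)
  total_pages = (record_count / page_size.to_f).ceil
  offset = (page_no - 1) * page_size
  [total_pages, page_size, offset]
end
```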
Create a named prepared statement that is stored in the database (and connection) for reuse.
# File lib/sequel/adapters/jdbc.rb, line 527 527: def prepare(type, name=nil, *values) 528: ps = to_prepared_statement(type, values) 529: ps.extend(PreparedStatementMethods) 530: if name 531: ps.prepared_statement_name = name 532: db.prepared_statements[name] = ps 533: end 534: ps 535: end
Translates a query block into a dataset. Query blocks can be useful when expressing complex SELECT statements, e.g.:
dataset = DB[:items].query do
  select :x, :y, :z
  filter{|o| (o.x > 1) & (o.y > 2)}
  order :z.desc
end
Which is the same as:
dataset = DB[:items].select(:x, :y, :z).filter{|o| (o.x > 1) & (o.y > 2)}.order(:z.desc)
Note that inside a call to query, you cannot call each, insert, update, or delete (or any method that calls those), or Sequel will raise an error.
# File lib/sequel/extensions/query.rb, line 30 30: def query(&block) 31: copy = clone({}) 32: copy.extend(QueryBlockCopy) 33: copy.instance_eval(&block) 34: clone(copy.opts) 35: end
This allows you to manually specify the graph aliases to use when using graph. You can use it to only select certain columns, and have those columns mapped to specific aliases in the result set. This is the equivalent of .select for a graphed dataset, and must be used instead of .select whenever graphing is used. Example:
DB[:artists].graph(:albums, :artist_id=>:id).
  set_graph_aliases(:artist_name=>[:artists, :name],
                    :album_name=>[:albums, :name],
                    :forty_two=>[:albums, :fourtwo, 42]).first
#=> {:artists=>{:name=>artists.name}, :albums=>{:name=>albums.name, :fourtwo=>42}}
Arguments:
# File lib/sequel/dataset/graph.rb, line 189 189: def set_graph_aliases(graph_aliases) 190: ds = select(*graph_alias_columns(graph_aliases)) 191: ds.opts[:graph_aliases] = graph_aliases 192: ds 193: end
These methods all execute the dataset's SQL on the database. They don't return modified datasets, so if used in a method chain they should be the last method called.